Sample records for "provide optimal performance"

  1. Continuous performance measurement in flight systems. [sequential control model]

    NASA Technical Reports Server (NTRS)

    Connelly, E. M.; Sloan, N. A.; Zeskind, R. M.

    1975-01-01

    The desired response of many man-machine control systems can be formulated as a solution to an optimal control synthesis problem where the cost index is given and the resulting optimal trajectories correspond to the desired trajectories of the man-machine system. Optimal control synthesis provides the reference criteria and the significance of error information required for performance measurement. The synthesis procedure described provides a continuous performance measure (CPM) which is independent of the mechanism generating the control action. Therefore, the technique provides a meaningful method for online evaluation of man's control capability in terms of total man-machine performance.
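
The core idea, scoring an operator's trajectory online against the cost index of the optimal trajectory, can be sketched as follows. The quadratic cost, the toy trajectories, and all names are illustrative assumptions, not taken from the paper:

```python
# Sketch of a continuous performance measure (CPM): score an operator's
# trajectory against the optimal trajectory of a given quadratic cost index.
# The toy scalar tracking task below is invented for illustration.

def cost_index(trajectory, reference, q=1.0):
    """Quadratic cost: weighted sum of squared tracking errors."""
    return sum(q * (x - r) ** 2 for x, r in zip(trajectory, reference))

def continuous_performance_measure(actual, optimal, reference):
    """Performance relative to the optimal trajectory: 1.0 means the
    operator matched the optimal cost; values fall toward 0 as the
    operator's cost grows."""
    j_opt = cost_index(optimal, reference)
    j_act = cost_index(actual, reference)
    return (1.0 + j_opt) / (1.0 + j_act)

reference = [0.0] * 5
optimal = [0.0] * 5            # the optimal control tracks perfectly here
operator = [0.1, 0.0, -0.1, 0.0, 0.1]
print(continuous_performance_measure(operator, optimal, reference))
```

Because the measure depends only on trajectories and the cost index, it is independent of whatever mechanism generated the control action, which is the property the abstract emphasizes.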

  2. Development and optimization of an energy-regenerative suspension system under stochastic road excitation

    NASA Astrophysics Data System (ADS)

    Huang, Bo; Hsieh, Chen-Yu; Golnaraghi, Farid; Moallem, Mehrdad

    2015-11-01

    In this paper a vehicle suspension system with energy harvesting capability is developed, and an analytical methodology for the optimal design of the system is proposed. The optimization technique provides design guidelines for determining the stiffness and damping coefficients aimed at optimal performance in terms of ride comfort and energy regeneration. The corresponding performance metrics are selected as the root-mean-square (RMS) of sprung mass acceleration and the expectation of generated power. The actual road roughness is considered as the stochastic excitation defined by ISO 8608:1995 standard road profiles and used in deriving the optimization method. An electronic circuit is proposed to provide variable damping in real time based on the optimization rule. A test bed is utilized, and experiments under different driving conditions are conducted to verify the effectiveness of the proposed method. The test results suggest that the analytical approach is credible in determining the optimality of system performance.

  3. Program optimizations: The interplay between power, performance, and energy

    DOE PAGES

    Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...

    2016-05-16

    Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations, including loop fusion, data structure transformations, and global allocations. A per-component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.

  4. Noise tolerant illumination optimization applied to display devices

    NASA Astrophysics Data System (ADS)

    Cassarly, William J.; Irving, Bruce

    2005-02-01

    Display devices have historically been designed through an iterative process using numerous hardware prototypes. This process is effective, but the number of iterations is limited by the time and cost to make the prototypes. In recent years, virtual prototyping using illumination software modeling tools has replaced many of the hardware prototypes. Typically, the designer specifies the design parameters, builds the software model, predicts the performance using a Monte Carlo simulation, and uses the performance results to repeat this process until an acceptable design is obtained. What is highly desired, and now possible, is to use illumination optimization to automate the design process. Illumination optimization provides the ability to explore a wider range of design options while also providing improved performance. Since Monte Carlo simulations are often used to calculate the system performance, and those predictions have statistical uncertainty, the use of noise-tolerant optimization algorithms is important. The use of noise-tolerant illumination optimization is demonstrated by considering display device designs that extract light using 2D paint patterns as well as 3D textured surfaces. A hybrid optimization approach that combines a mesh feedback optimization with a classical optimizer is demonstrated. Displays with LED sources and cold cathode fluorescent lamps are considered.
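
A minimal sketch of the noise-tolerance issue: when each candidate design is scored by a noisy Monte Carlo style evaluation, a noise-tolerant optimizer averages repeated evaluations before comparing candidates. The objective function, noise model, and replicate count below are invented for illustration:

```python
import random

# Sketch of a noise-tolerant search over candidate designs whose "flux"
# is only available through a noisy Monte Carlo style evaluation.
# Averaging replicates shrinks the statistical uncertainty enough to
# compare candidates reliably.

def noisy_flux(x, rng):
    true_value = -(x - 3.0) ** 2            # true optimum at x = 3
    return true_value + rng.gauss(0.0, 0.05)

def noise_tolerant_search(candidates, n_replicates=200, seed=0):
    rng = random.Random(seed)
    def score(x):
        return sum(noisy_flux(x, rng) for _ in range(n_replicates)) / n_replicates
    return max(candidates, key=score)

best = noise_tolerant_search([x * 0.5 for x in range(13)])  # 0.0 .. 6.0
print(best)
```

With 200 replicates the averaged noise is far smaller than the gap between neighboring candidates, so the search lands on the true optimum despite the per-sample noise.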

  5. Performance evaluation of a health insurance in Nigeria using optimal resource use: health care providers perspectives

    PubMed Central

    2014-01-01

    Background Performance measures are often neglected during the transition period of national health insurance scheme implementation in many low and middle income countries. These measurements evaluate the extent to which various aspects of the schemes meet their key objectives. This study assesses the implementation of a health insurance scheme using optimal resource use domains and examines possible factors that influence each domain, according to providers’ perspectives. Methods A retrospective, cross-sectional survey was done between August and December 2010 in Kaduna state, and 466 health care provider personnel were interviewed. Optimal resource use was defined in four domains: provider payment mechanism (capitation and fee-for-service payment methods), benefit package, administrative efficiency, and active monitoring mechanism. Logistic regression analysis was used to identify provider factors that may influence each domain. Results In the provider payment mechanism domain, the capitation payment method (95%) performed better than the fee-for-service payment method (62%). The benefit package domain performed strongly (97%), while the active monitoring mechanism performed weakly (37%). In the administrative efficiency domain, both promptness of the referral system (80%) and prompt arrival of funds (93%) performed well. At the individual level, providers with fewer enrolees encountered difficulties with reimbursement. Other factors significantly influenced each of the optimal resource use domains. Conclusions The fee-for-service payment method and claims review, in the provider payment and active monitoring mechanisms, respectively, performed weakly according to the providers’ (individual-level) perspectives. A shortfall on the supply side of health insurance could lead to a direct or indirect adverse effect on the demand side of the scheme. Capitation payment per enrolee should be revised to conform to economic circumstances. Performance indicators and providers’ characteristics and experiences associated with resource use can assist policy makers to monitor and evaluate health insurance implementation. PMID:24628889

  6. Adjustable control station with movable monitors and cameras for viewing systems in robotics and teleoperations

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1994-01-01

    Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors that match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator's internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator, since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration are determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces, and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.

  7. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
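
The dual-version pattern described above can be sketched in a few lines; the two toy "compiled versions" and the rollback wrapper are illustrative stand-ins, not the patented compiler mechanism:

```python
# Sketch of the aggressive/conservative dual-version pattern: run the
# aggressively "optimized" version, and roll back to the conservative
# version if it raises a new exception. Both versions are toy stand-ins.

def aggressive_mean(values):
    # "Optimized" version: skips the empty-input safety check.
    return sum(values) / len(values)

def conservative_mean(values):
    # Safe version: handles the case the aggressive one does not.
    return sum(values) / len(values) if values else 0.0

def run_with_rollback(aggressive, conservative, *args):
    try:
        return aggressive(*args)
    except Exception:
        # Failure detected: switch to the conservatively compiled version.
        return conservative(*args)

print(run_with_rollback(aggressive_mean, conservative_mean, [1.0, 2.0, 3.0]))
print(run_with_rollback(aggressive_mean, conservative_mean, []))
```

The patent's predictive mechanisms would decide up front which version to dispatch; the wrapper above only shows the rollback half of the scheme.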

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, William; Laird, Carl; Siirola, John

    Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.

  9. Taguchi optimization of bismuth-telluride based thermoelectric cooler

    NASA Astrophysics Data System (ADS)

    Anant Kishore, Ravi; Kumar, Prashant; Sanghadasa, Mohan; Priya, Shashank

    2017-07-01

    In the last few decades, considerable effort has been made to enhance the figure-of-merit (ZT) of thermoelectric (TE) materials. However, the performance of commercial TE devices still remains low due to the fact that the module figure-of-merit depends not only on the material ZT, but also on the operating conditions and configuration of TE modules. This study takes into account a comprehensive set of parameters to conduct the numerical performance analysis of the thermoelectric cooler (TEC) using a Taguchi optimization method. The Taguchi method is a statistical tool that predicts the optimal performance with far fewer experimental runs than conventional experimental techniques. Taguchi results are also compared with the optimized parameters obtained by a full factorial optimization method, which reveals that the Taguchi method provides an optimum or near-optimum TEC configuration using only 25 experiments against the 3125 experiments needed by the conventional optimization method. This study also shows that environmental factors such as ambient temperature and cooling coefficient do not significantly affect the optimum geometry and optimum operating temperature of TECs. The optimum TEC configuration for simultaneous optimization of cooling capacity and coefficient of performance is also provided.
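
The 25-versus-3125 comparison can be reproduced on a toy problem: the sketch below builds a 25-run orthogonal array (5 factors at 5 levels), picks optimum levels from main effects, and matches the full-factorial answer for a separable objective. The objective table and the Z5 array construction are illustrative assumptions, not the paper's TEC model:

```python
from itertools import product

# Sketch of Taguchi-style screening: an L25 orthogonal array (5 factors,
# 5 levels, 25 runs) plus a main-effects analysis, compared against the
# 5**5 = 3125-run full factorial. The separable "cooling capacity" table
# is made up for illustration.

N_FACTORS, N_LEVELS = 5, 5
CONTRIB = [[(f + 1) * ((lvl - f) % 5) for lvl in range(5)] for f in range(5)]

def capacity(levels):
    return sum(CONTRIB[f][lvl] for f, lvl in enumerate(levels))

# L25 orthogonal array via the standard Z5 difference construction:
# columns (i + k*j) mod 5 are pairwise orthogonal for distinct k.
l25 = [tuple((i + k * j) % 5 for k in range(N_FACTORS))
       for i, j in product(range(5), repeat=2)]

def taguchi_optimum(runs):
    best = []
    for f in range(N_FACTORS):
        # Mean response at each level of factor f (main effect).
        means = [sum(capacity(r) for r in runs if r[f] == lvl) /
                 sum(1 for r in runs if r[f] == lvl) for lvl in range(N_LEVELS)]
        best.append(max(range(N_LEVELS), key=means.__getitem__))
    return tuple(best)

full = max(product(range(5), repeat=5), key=capacity)   # 3125 evaluations
print(taguchi_optimum(l25), full)                       # 25 evaluations
```

For a separable objective the main effects recover the exact optimum; for interacting factors the Taguchi answer is only near-optimum, which matches the hedged claim in the abstract.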

  10. Optimization of an Active Twist Rotor Blade Planform for Improved Active Response and Forward Flight Performance

    NASA Technical Reports Server (NTRS)

    Sekula, Martin K.; Wilbur, Matthew L.

    2014-01-01

    A study was conducted to identify the optimum blade tip planform for a model-scale active twist rotor. The analysis identified blade tip design traits which simultaneously reduce rotor power of an unactuated rotor while leveraging aeromechanical couplings to tailor the active response of the blade. Optimizing the blade tip planform for minimum rotor power in forward flight provided a 5 percent improvement in performance compared to a rectangular blade tip, but reduced the vibration control authority of active twist actuation by 75 percent. Optimizing for maximum blade twist response increased the vibration control authority by 50 percent compared to the rectangular blade tip, with little effect on performance. Combined response and power optimization resulted in a blade tip design which provided similar vibration control authority to the rectangular blade tip, but with a 3.4 percent improvement in rotor performance in forward flight.

  11. Testing the Limits of Optimizing Dual-Task Performance in Younger and Older Adults

    PubMed Central

    Strobach, Tilo; Frensch, Peter; Müller, Hermann Josef; Schubert, Torsten

    2012-01-01

    Impaired dual-task performance in younger and older adults can be improved with practice. Optimal conditions even allow for a (near) elimination of this impairment in younger adults. However, it is unknown whether such (near) elimination is the limit of performance improvements in older adults. The present study tests this limit in older adults under conditions of (a) a high amount of dual-task training and (b) training with simplified component tasks in dual-task situations. The data showed that a high amount of dual-task training in older adults provided no evidence for an improvement of dual-task performance to the optimal dual-task performance level achieved by younger adults. However, training with simplified component tasks in dual-task situations exclusively in older adults provided a similar level of optimal dual-task performance in both age groups. Therefore, by applying a testing-the-limits approach, we demonstrated that older adults improved dual-task performance to the same level as younger adults at the end of training under very specific conditions. PMID:22408613

  12. R&D 100, 2016: Pyomo 4.0 – Python Optimization Modeling Objects

    ScienceCinema

    Hart, William; Laird, Carl; Siirola, John

    2018-06-13

    Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.

  13. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of optimal sensors in predicting PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force searching technique, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with optimal sensors, the performance of the PEM fuel cell can be predicted with good accuracy.
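
A minimal sketch of the exhaustive (brute-force) selection step: score every subset of candidate sensors using a sensitivity matrix and a noise penalty, then keep the best. The matrix, noise figures, and scoring rule below are invented placeholders, not the paper's fuel cell model:

```python
from itertools import combinations

# Sketch of exhaustive sensor-subset selection from a sensitivity matrix:
# rows are candidate sensors, columns are health parameters. Subsets that
# leave a parameter poorly observed are rejected; noisy sensors are
# penalized, reflecting the sensitivity-plus-noise-resistance criterion.

SENSITIVITY = [  # sensor x parameter (made-up values)
    [0.9, 0.1, 0.0],
    [0.0, 0.8, 0.1],
    [0.1, 0.1, 0.7],
    [0.5, 0.5, 0.0],
]
NOISE = [0.05, 0.10, 0.05, 0.30]
N_PARAMS = 3

def score(subset):
    # Each parameter must be observable by at least one chosen sensor.
    per_param = [max(SENSITIVITY[s][p] for s in subset) for p in range(N_PARAMS)]
    if min(per_param) < 0.5:
        return float("-inf")
    return sum(per_param) - sum(NOISE[s] for s in subset)

def best_subset(n_sensors, size):
    return max(combinations(range(n_sensors), size), key=score)

print(best_subset(4, 3))
```

Brute force is exact but exponential in the number of sensors; the paper's largest-gap method trades that exactness for scalability.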

  14. Optimal Doppler centroid estimation for SAR data from a quasi-homogeneous source

    NASA Technical Reports Server (NTRS)

    Jin, M. Y.

    1986-01-01

    This correspondence briefly describes two Doppler centroid estimation (DCE) algorithms, provides a performance summary for these algorithms, and presents the experimental results. These algorithms include that of Li et al. (1985) and a newly developed one that is optimized for quasi-homogeneous sources. The performance enhancement achieved by the optimal DCE algorithm is clearly demonstrated by the experimental results.

  15. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models, as there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It is then followed by a review of different simulation optimization approaches to addressing constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
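
The basic loop of constrained optimization via simulation can be sketched as: simulate each design many times, estimate the primary and secondary measures, discard designs whose secondary estimate violates the constraint, and pick the best feasible design. The toy simulator and all numbers below are illustrative assumptions:

```python
import random

# Sketch of stochastically constrained optimization via simulation:
# maximize expected profit (primary measure) subject to expected defect
# rate (secondary measure) staying under a cap. The simulator is a toy
# model, not from the paper.

def simulate(design, rng):
    profit = 10.0 * design - 0.8 * design ** 2 + rng.gauss(0, 0.1)
    defects = 0.02 * design + rng.gauss(0, 0.002)
    return profit, defects

def constrained_optimum(designs, defect_cap=0.09, n_rep=500, seed=1):
    rng = random.Random(seed)
    feasible = {}
    for d in designs:
        runs = [simulate(d, rng) for _ in range(n_rep)]
        mean_profit = sum(p for p, _ in runs) / n_rep
        mean_defects = sum(q for _, q in runs) / n_rep
        if mean_defects <= defect_cap:        # secondary-measure constraint
            feasible[d] = mean_profit
    return max(feasible, key=feasible.get)

print(constrained_optimum([1, 2, 3, 4, 5, 6]))
```

Note that the unconstrained profit optimum (design 6) is infeasible here; the constraint on the secondary measure is what pulls the answer back, which is exactly the trade-off the review is organized around.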

  16. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.

  17. Optimal Operation of a Thermal Energy Storage Tank Using Linear Optimization

    NASA Astrophysics Data System (ADS)

    Civit Sabate, Carles

    In this thesis, an optimization procedure for minimizing the operating costs of a Thermal Energy Storage (TES) tank is presented. The facility on which the optimization is based is the combined cooling, heating, and power (CCHP) plant at the University of California, Irvine. TES tanks decouple the demand for chilled water from its generation by the refrigeration and air-conditioning plants over the course of a day. They can be used to perform demand-side management, and optimization techniques can help to approach their optimal use. The proposed optimization approach provides a fast and reliable methodology for finding the optimal use of the TES tank to reduce energy costs, and provides a tool for future implementation of optimal control laws on the system. Advantages of the proposed methodology are studied using simulation with historical data.
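
The demand-side management idea can be sketched with a greedy stand-in for the linear program: store chilled water in the cheapest hours and serve demand from the tank in the most expensive ones. Prices, capacity, and demand below are invented, and the thesis solves this via linear optimization rather than this greedy rule:

```python
# Sketch of demand-side management with a storage tank: given hourly
# energy prices and a flat chilled-water demand, buy extra in the
# cheapest hours to charge the tank and serve demand from storage in
# the most expensive hours. All numbers are illustrative.

def schedule_tank(prices, demand_per_hour, capacity):
    hours = len(prices)
    cheap_first = sorted(range(hours), key=prices.__getitem__)
    charge_hours = set(cheap_first[:capacity])          # 1 unit stored/hour
    discharge_hours = set(cheap_first[-capacity:])      # serve from tank
    cost = 0.0
    for h in range(hours):
        cost += prices[h] * demand_per_hour             # baseline load
        if h in charge_hours:
            cost += prices[h]                           # buy extra to store
        if h in discharge_hours:
            cost -= prices[h]                           # avoid buying later
    return cost

prices = [3, 2, 1, 1, 4, 9, 8, 7]          # $/unit, one value per hour
baseline = sum(p * 1 for p in prices)       # no-storage cost, demand = 1
with_tank = schedule_tank(prices, demand_per_hour=1, capacity=3)
print(baseline, with_tank)
```

A real linear program would additionally enforce tank state-of-charge dynamics and charging-rate limits hour by hour; the greedy rule only conveys why shifting load across the daily price spread cuts cost.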

  18. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  19. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  20. Multi-objective optimization for generating a weighted multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. When a single evaluation metric is considered, there is a consensus on the assignment of weighting factors: the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, it confronts a big challenge when there are multiple metrics under consideration. With multiple evaluation metrics, a simple averaging of performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
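
One way to sketch the multi-metric step is to keep only the Pareto-optimal (non-dominated) models before assigning weights; simple averaging of metrics would hide exactly the trade-offs described above. The model names, scores, and weighting rule below are illustrative, not the study's actual method:

```python
# Sketch of multi-objective model weighting: with several error metrics
# per model (lower is better), keep the Pareto-optimal set and assign
# weights only within it. All scores are invented for illustration.

def dominates(a, b):
    """True if score tuple a is no worse than b on every metric and
    strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(scores):
    return {name for name, s in scores.items()
            if not any(dominates(t, s) for t in scores.values())}

def weights(scores):
    front = pareto_set(scores)
    total = sum(1.0 / sum(scores[m]) for m in front)
    return {m: (1.0 / sum(scores[m])) / total if m in front else 0.0
            for m in scores}

scores = {               # (temperature error, precipitation error)
    "model_a": (0.2, 0.9),
    "model_b": (0.5, 0.4),
    "model_c": (0.6, 0.5),  # dominated by model_b on both metrics
}
print(weights(scores))
```

The dominated model receives zero weight regardless of how the surviving trade-off solutions are weighted; choosing among the non-dominated models is where the multi-objective optimization of the study actually operates.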

  1. Positivity in healthcare: relation of optimism to performance.

    PubMed

    Luthans, Kyle W; Lebsack, Sandra A; Lebsack, Richard R

    2008-01-01

    The purpose of this paper is to explore the linkage between nurses' levels of optimism and performance outcomes. The study sample consisted of 78 nurses in all areas of a large healthcare facility (hospital) in the Midwestern United States. The participants completed surveys to determine their current state of optimism. Supervisory performance appraisal data were gathered in order to measure performance outcomes. Spearman correlations and a one-way ANOVA were used to analyze the data. The results indicated a highly significant positive relationship between the nurses' measured state of optimism and their supervisors' ratings of their commitment to the mission of the hospital, a measure of contribution to increasing customer satisfaction, and an overall measure of work performance. This was an exploratory study. Larger sample sizes and longitudinal data would be beneficial because it is probable that state optimism levels will vary and that it might be more accurate to measure state optimism at several points over time in order to better predict performance outcomes. Finally, the study design does not imply causation. Suggestions for effectively developing and managing nurses' optimism to positively impact their performance are provided. To date, there has been very little empirical evidence assessing the impact that positive psychological capacities such as optimism of key healthcare professionals may have on performance. This paper was designed to help begin to fill this void by examining the relationship between nurses' self-reported optimism and their supervisors' evaluations of their performance.
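
The Spearman correlation used in the analysis can be sketched with the standard rank-difference formula (ties not handled); the optimism and performance scores below are made up:

```python
# Sketch of the Spearman rank correlation: rank both variables, then
# apply rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the
# per-subject rank difference. Assumes no tied values.

def ranks(xs):
    order = sorted(range(len(xs)), key=xs.__getitem__)
    r = [0] * len(xs)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman_rho(x, y):
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

optimism = [12, 18, 9, 22, 15]
performance = [3.1, 4.0, 2.5, 4.6, 3.4]   # perfectly monotone with optimism
print(spearman_rho(optimism, performance))
```

Because Spearman works on ranks, it captures any monotone association between optimism and performance ratings, not just a linear one, which is why it suits ordinal appraisal data.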

  2. Managing imperfect competition by pay for performance and reference pricing.

    PubMed

    Mak, Henry Y

    2018-01-01

    I study a managed health service market where differentiated providers compete for consumers by choosing multiple service qualities, and where copayments that consumers pay and payments that providers receive for services are set by a payer. The optimal regulation scheme is two-sided. On the demand side, it justifies and clarifies value-based reference pricing. On the supply side, it prescribes pay for performance when consumers misperceive service benefits or providers have intrinsic quality incentives. The optimal bonuses are expressed in terms of demand elasticities, service technology, and provider characteristics. However, pay for performance may not outperform prospective payment when consumers are rational and providers are profit maximizing, or when one of the service qualities is not contractible. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Optimization of MLS receivers for multipath environments

    NASA Technical Reports Server (NTRS)

    Mcalpine, G. A.; Irwin, S. H.; Nelson; Roleyni, G.

    1977-01-01

    Optimal design studies of MLS angle-receivers and a theoretical design study of MLS DME-receivers are reported. The angle-receiver results include an integration of the scan data processor and tracking filter components of the optimal receiver into a unified structure. An extensive simulation study comparing the performance of the optimal and threshold receivers in a wide variety of representative dynamical interference environments was made. The optimal receiver was generally superior. A simulation of the performance of the threshold and delay-and-compare receivers in various signal environments was performed. An analysis of combined errors due to lateral reflections from vertical structures with small differential path delays, specular ground reflections with negligible differential path delays, and thermal noise in the receivers is provided.

  4. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
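
The advantage of analytic derivatives over finite differences can be sketched with forward-mode automatic differentiation via dual numbers; the toy "thrust" function and all names below are illustrative and unrelated to Pycycle's actual API:

```python
# Sketch of why analytic derivatives help gradient-based optimization:
# a tiny forward-mode automatic differentiation class (dual numbers)
# yields exact derivatives of a toy "cycle" function, while finite
# differences only approximate them and require extra evaluations.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.value + o.value, self.deriv + o.deriv)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.value * o.value,
                    self.value * o.deriv + self.deriv * o.value)
    __rmul__ = __mul__

def thrust(pressure_ratio):
    # Toy stand-in for a cycle model: works on floats or Duals.
    return 3.0 * pressure_ratio * pressure_ratio + 2.0 * pressure_ratio

x = 4.0
analytic = thrust(Dual(x, 1.0)).deriv            # exact: 6x + 2 = 26
h = 1e-3
finite = (thrust(x + h) - thrust(x - h)) / (2 * h)
print(analytic, finite)
```

The analytic value carries no step-size error and costs one evaluation instead of two per design variable, which is the scaling argument behind preferring analytic derivatives in cycle optimization.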

  5. Control-enhanced multiparameter quantum estimation

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Yuan, Haidong

    2017-10-01

    Most studies in multiparameter estimation assume the dynamics is fixed and focus on identifying the optimal probe state and the optimal measurements. In practice, however, controls are usually available to alter the dynamics, which provides another degree of freedom. In this paper we employ optimal control methods, particularly gradient ascent pulse engineering (GRAPE), to design optimal controls for the improvement of the precision limit in multiparameter estimation. We show that the controlled schemes are not only capable of providing a higher precision limit, but also are more stable against inaccuracy in the time point at which the measurements are performed. This high time stability will benefit practical metrology, where it is hard to perform the measurement at a very accurate time point due to the response time of the measurement apparatus.

  6. Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.

    PubMed

    Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank

    2017-12-01

    Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over a large thermal gradient and thus provide better performance (reported efficiency up to 11%) compared to traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity of their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We have considered a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, as compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. Taguchi results are validated against the results obtained using the traditional full factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.

  7. Progress Toward Adaptive Integration and Optimization of Automated and Neural Processing Systems: Establishing Neural and Behavioral Benchmarks of Optimized Performance

    DTIC Science & Technology

    2014-11-01

    Excerpted sections: Collaborative BCI for Improving Overall Performance; Simulator Development. Questions posed: where do brain-computer interfaces (BCIs) provide the biggest improvement in performance, and can clear advantages with BCIs be demonstrated? Fig. 18 shows ROC curves for each subject after the combination of 2 trials.

  8. Encoder-Decoder Optimization for Brain-Computer Interfaces

    PubMed Central

    Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam

    2015-01-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919

  9. Encoder-decoder optimization for brain-computer interfaces.

    PubMed

    Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam

    2015-06-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  10. Human Performance on the Traveling Salesman and Related Problems: A Review

    ERIC Educational Resources Information Center

    MacGregor, James N.; Chu, Yun

    2011-01-01

    The article provides a review of recent research on human performance on the traveling salesman problem (TSP) and related combinatorial optimization problems. We discuss what combinatorial optimization problems are, why they are important, and why they may be of interest to cognitive scientists. We next describe the main characteristics of human…

  11. Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.

    PubMed

    Heydari, Ali; Balakrishnan, Sivasubramanya N

    2013-01-01

    To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs are provided for 1) the reinforcement learning-based training method to the optimal solution, 2) the training error, and 3) the network weights. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples, including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights, and it provides comprehensive feedback solutions online though it is trained offline.

  12. Particle Swarm Optimization Toolbox

    NASA Technical Reports Server (NTRS)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO; a GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both parents; the algorithm relies on this combination of traits to produce solutions that improve on either parent. As the algorithm progresses, individuals that hold these optimal traits emerge as the optimal solutions. Due to the generic design of all the optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers: its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since the specific detail of this function is of no concern to the optimizer.
These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.
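    A minimal single-objective PSO of the kind the toolbox implements can be sketched as follows. The inertia and acceleration coefficients below are common textbook defaults, not the toolbox's actual settings:

```python
import random

def pso_minimize(objective, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal single-objective PSO: each particle is pulled toward its own
    best-seen position (pbest) and the swarm's best position (gbest)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    For example, minimizing the 2-D sphere function sum(x_i^2) over [-5, 5] drives the best objective value toward zero.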

  13. Application of controller partitioning optimization procedure to integrated flight/propulsion control design for a STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Schmidt, Phillip H.

    1993-01-01

    A parameter optimization framework was developed earlier to solve the problem of partitioning a centralized controller into a decentralized, hierarchical structure suitable for integrated flight/propulsion control (IFPC) implementation. This paper presents results from the application of the controller partitioning optimization procedure to IFPC design for a Short Take-Off and Vertical Landing (STOVL) aircraft in transition flight. The controller partitioning problem and the parameter optimization algorithm are briefly described. Insight is provided into choosing the various user-selected parameters in the optimization cost function so that the resulting optimized subcontrollers will retain the characteristics of the centralized controller that are crucial to achieving the desired closed-loop performance and robustness, while maintaining the subcontroller structure constraints that are crucial for IFPC implementation. The optimization procedure is shown to improve upon the initial partitioned subcontrollers and lead to performance comparable to that achieved with the centralized controller. This application also provides insight into the issues that should be addressed at the centralized control design level in order to obtain implementable partitioned subcontrollers.

  14. A design optimization process for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Chamberlain, Robert G.; Fox, George; Duquette, William H.

    1990-01-01

    The Space Station Freedom Program is used to develop and implement a process for design optimization. Because the relative worth of arbitrary design concepts cannot be assessed directly, comparisons must be based on designs that provide the same performance from the point of view of station users; such designs can be compared in terms of life cycle cost. Since the technology required to produce a space station is widely dispersed, a decentralized optimization process is essential. A formulation of the optimization process is provided and the mathematical models designed to facilitate its implementation are described.

  15. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future watershed behavior under varying climate conditions. This study investigated calibration performance as a function of the length of the calibration period, the objective function, the hydrologic model structure, and the optimization method. To do this, combinations of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) were tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across the different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root-mean-square error, or Nash-Sutcliffe efficiency as the objective function yielded better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years carry different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
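    Two of the objective functions the study compares can be written compactly. These are the standard definitions (a Nash-Sutcliffe efficiency of 1 is a perfect fit, 0 is no better than predicting the observed mean, and a percent bias of 0 means no systematic error); the function names are ours:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def percent_bias(obs, sim):
    """Percent bias: average tendency to over- (positive) or under-predict."""
    return 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)
```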

  16. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    USGS Publications Warehouse

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees historically have been managed for crop pollination; however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not yield the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.

  17. A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy.

    PubMed

    Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H

    2018-05-02

    A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient- and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  19. Dialogues in Performance: A Team-Taught Course on the Afterlife in the Classical and Italian Traditions

    ERIC Educational Resources Information Center

    Gosetti-Murrayjohn, Angela; Schneider, Federico

    2009-01-01

    This article provides a reflection on a team-teaching experience in which performative dialogues between co-instructors and among students provided a pedagogical framework within which comparative analysis of textual traditions within the classical tradition could be optimized. Performative dialogues thus provided a model for and enactment of…

  20. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft's aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  1. System Analysis and Performance Benefits of an Optimized Rotorcraft Propulsion System

    NASA Technical Reports Server (NTRS)

    Bruckner, Robert J.

    2007-01-01

    The propulsion system of rotorcraft vehicles is the most critical system to the vehicle in terms of safety and performance. The propulsion system must provide both vertical lift and forward flight propulsion during the entire mission. Whereas propulsion is a critical element for all flight vehicles, it is particularly critical for rotorcraft due to their limited safe, un-powered landing capability. This unparalleled reliability requirement has led rotorcraft power plants down a certain evolutionary path in which the system looks and performs quite similarly to those of the 1960s. By and large, the advancements in rotorcraft propulsion have come in terms of safety and reliability, not performance. The concept of the optimized propulsion system is a means by which both reliability and performance can be improved for rotorcraft vehicles. The optimized rotorcraft propulsion system, which couples an oil-free turboshaft engine to a highly loaded gearbox that provides axial load support for the power turbine, can be designed with current laboratory-proven technology. Such a system can provide up to a 60% weight reduction in the propulsion system of rotorcraft vehicles. Several technical challenges are apparent at the conceptual design level and should be addressed with current research.

  2. USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation

    DTIC Science & Technology

    2016-09-01

    Approved for public release; distribution is unlimited. USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation, by Timothy A. Curling. The proposed construct combines optimization and discrete-event simulation and can potentially provide an effective means of improving order-management decisions.

  3. Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    DTIC Science & Technology

    2016-09-09

    Human-Machine Collaborative Optimization via Apprenticeship Scheduling (COVAS) performs machine learning from human expert demonstration, in conjunction with optimization, to automatically and efficiently produce optimal solutions to challenging real-world scheduling problems. COVAS first learns a policy from human scheduling demonstration via apprenticeship learning, then uses this initial solution to provide a tight bound on the value of the optimal solution, thereby substantially…

  4. Sport nutrition for young athletes

    PubMed Central

    Purcell, Laura K

    2013-01-01

    Nutrition is an important part of sport performance for young athletes, in addition to allowing for optimal growth and development. Macronutrients, micronutrients and fluids in the proper amounts are essential to provide energy for growth and activity. To optimize performance, young athletes need to learn what, when and how to eat and drink before, during and after activity. PMID:24421690

  5. Partial Storage Optimization and Load Control Strategy of Cloud Data Centers

    PubMed Central

    2015-01-01

    We present a novel approach that addresses cloud storage issues and provides a fast load-balancing algorithm. Our approach is based on partitioning files and downloading them concurrently in dual directions from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which substantially improves cloud storage utilization. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing data as a service (DaaS) on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which consumes a great deal of space on the cloud nodes; reducing the space needed reduces the cost of providing it. Moreover, performance also increases, since multiple cloud servers collaborate to deliver data to cloud clients faster. PMID:25973444

  6. Partial storage optimization and load control strategy of cloud data centers.

    PubMed

    Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela

    2015-01-01

    We present a novel approach that addresses cloud storage issues and provides a fast load-balancing algorithm. Our approach is based on partitioning files and downloading them concurrently in dual directions from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which substantially improves cloud storage utilization. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing data as a service (DaaS) on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which consumes a great deal of space on the cloud nodes; reducing the space needed reduces the cost of providing it. Moreover, performance also increases, since multiple cloud servers collaborate to deliver data to cloud clients faster.
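    The dual-direction idea can be sketched as a chunk-assignment rule. This toy planner (names are ours, and the real algorithm's partition placement is more involved) has one node fetch partitions from the front and another from the back until they meet:

```python
def dual_direction_plan(n_chunks):
    """Assign chunk indices to two download sources: one walks forward from
    the first chunk, the other backward from the last, meeting in the middle."""
    mid = (n_chunks + 1) // 2
    front = list(range(0, mid))                 # fetched front-to-middle
    back = list(range(n_chunks - 1, mid - 1, -1))  # fetched back-to-middle
    return front, back
```

    Together the two index lists cover every chunk exactly once, so the two transfers can proceed concurrently without coordination beyond the fixed split point.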

  7. High-Fidelity Aerostructural Design Optimization of Transport Aircraft with Continuous Morphing Trailing Edge Technology

    NASA Astrophysics Data System (ADS)

    Burdette, David A., Jr.

    Adaptive morphing trailing edge technology offers the potential to decrease the fuel burn of transonic commercial transport aircraft by allowing wings to dynamically adjust to changing flight conditions. Current configurations allow flap and aileron droop; however, this approach provides limited degrees of freedom and increased drag produced by gaps in the wing's surface. Leading members of the aeronautics community, including NASA, AFRL, Boeing, and a number of academic institutions, have extensively researched morphing technology for its potential to improve aircraft efficiency. With modern computational tools it is possible to accurately and efficiently model aircraft configurations in order to quantify the efficiency improvements offered by morphing technology. Coupled high-fidelity aerodynamic and structural solvers provide the capability to model and thoroughly understand the nuanced trade-offs involved in aircraft design. This capability is important for a detailed study of the capabilities of morphing trailing edge technology. Gradient-based multidisciplinary design optimization provides the ability to efficiently traverse design spaces and optimize the trade-offs associated with the design. This thesis presents a number of optimization studies comparing optimized configurations with and without morphing trailing edge devices. The baseline configuration used throughout this work is the NASA Common Research Model. The first optimization comparison considers the optimal fuel burn predicted by the Breguet range equation at a single cruise point. This initial single-point optimization comparison demonstrated a limited fuel burn savings of less than 1%. Given the effectiveness of the passive aeroelastic tailoring in the optimized non-morphing wing, the single-point optimization offered limited potential for morphing technology to provide any benefit. To provide a more appropriate comparison, a number of multipoint optimizations were performed. With a 3-point stencil, the morphing wing burned 2.53% less fuel than its optimized non-morphing counterpart. Expanding further to a 7-point stencil, the morphing wing used 5.04% less fuel. Additional studies demonstrate that the size of the morphing device can be reduced without sizable performance reductions, and that as aircraft wings' aspect ratios increase, the effectiveness of morphing trailing edge devices increases. The final set of studies in this thesis considers mission analysis, including climb, multi-altitude cruise, and descent. These mission analyses were performed with a number of surrogate models, trained with O(100) optimizations. These optimizations demonstrated fuel burn reductions as large as 5% at off-design conditions. The fuel burn predicted by the mission analysis was up to 2.7% lower for the morphing wing compared to the conventional configuration.
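    The single-point comparisons rest on the Breguet range equation, which ties fuel burn to aerodynamic and propulsive efficiency for a fixed cruise range. This is the standard form with the usual symbols, not notation taken from the thesis:

```latex
R = \frac{V}{c_T}\,\frac{L}{D}\,\ln\frac{W_{\text{initial}}}{W_{\text{final}}}
```

    where $R$ is range, $V$ cruise speed, $c_T$ thrust-specific fuel consumption, $L/D$ the lift-to-drag ratio, and $W_{\text{initial}}$, $W_{\text{final}}$ the start- and end-of-cruise weights; the fuel burned is $W_{\text{initial}} - W_{\text{final}}$, so for fixed $R$ a higher $L/D$ (as with a morphing trailing edge) lowers the required weight ratio and hence the fuel burn.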

  8. Effect analysis of design variables on the disc in a double-eccentric butterfly valve.

    PubMed

    Kang, Sangmo; Kim, Da-Eun; Kim, Kuk-Kyeom; Kim, Jun-Oh

    2014-01-01

    We have performed a shape optimization of the disc in an industrial double-eccentric butterfly valve, using an effect analysis of the design variables to enhance valve performance. For the optimization, we select three performance quantities (pressure drop, maximum stress, and mass) as the responses and three dimensions of the disc shape as the design variables. We then compose an orthogonal-array layout (L16) by performing numerical simulations of the flow and structure using a commercial package, ANSYS v13.0, and analyze the effect of the design variables on the responses using design of experiments. Finally, we formulate a multiobjective function consisting of the three responses and propose an optimal combination of the design variables to maximize valve performance. Simulation results show that the disc thickness has the most significant effect on performance and that the optimal design provides better performance than the initial design.
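    One common way to combine responses such as pressure drop, maximum stress, and mass into a single objective is a weighted sum of normalized values. This is an illustrative form (names and weights are ours), not necessarily the exact multiobjective function used in the paper:

```python
def normalize(value, worst, best):
    """Map a response onto [0, 1], where 1 is the preferred end.
    For a quantity to minimize, pass worst > best; the sign works out."""
    return (value - worst) / (best - worst)

def multiobjective_score(values, worsts, bests, weights):
    """Weighted sum of normalized responses; higher is better."""
    return sum(w * normalize(v, lo, hi)
               for v, lo, hi, w in zip(values, worsts, bests, weights))
```

    With this form, a design that achieves the best observed value on every response scores exactly the sum of the weights (1.0 if they are normalized).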

  9. Unrealistic optimism in advice taking: A computational account.

    PubMed

    Leong, Yuan Chang; Zaki, Jamil

    2018-02-01

    Expert advisors often make surprisingly inaccurate predictions about the future, yet people heed their suggestions nonetheless. Here we provide a novel, computational account of this unrealistic optimism in advice taking. Across 3 studies, participants observed as advisors predicted the performance of a stock. Advisors varied in their accuracy, performing reliably above, at, or below chance. Despite repeated feedback, participants exhibited inflated perceptions of advisors' accuracy, and reliably "bet" on advisors' predictions more than their performance warranted. Participants' decisions tightly tracked a computational model that makes 2 assumptions: (a) people hold optimistic initial expectations about advisors, and (b) people preferentially incorporate information that adheres to their expectations when learning about advisors. Consistent with model predictions, explicitly manipulating participants' initial expectations altered their optimism bias and subsequent advice-taking. With well-calibrated initial expectations, participants no longer exhibited an optimism bias. We then explored crowdsourced ratings as a strategy to curb unrealistic optimism in advisors. Star ratings for each advisor were collected from an initial group of participants, which were then shown to a second group of participants. Instead of calibrating expectations, these ratings propagated and exaggerated the unrealistic optimism. Our results provide a computational account of the cognitive processes underlying inflated perceptions of expertise, and explore the boundary conditions under which they occur. We discuss the adaptive value of this optimism bias, and how our account can be extended to explain unrealistic optimism in other domains. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
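    The model's two assumptions (optimistic initial expectations, and preferential weighting of expectation-consistent evidence) can be sketched as an asymmetric delta rule with an optimistic prior. The prior and learning rates below are illustrative values, not the paper's fitted parameters:

```python
def update_belief(belief, outcome, lr_confirm=0.3, lr_disconfirm=0.1):
    """Asymmetric delta rule: a correct prediction (outcome=1) confirms the
    optimistic expectation and is weighted more than a miss (outcome=0)."""
    lr = lr_confirm if outcome == 1 else lr_disconfirm
    return belief + lr * (outcome - belief)

def estimate_accuracy(outcomes, prior=0.7):
    """Track the believed accuracy of an advisor, starting from an
    optimistic prior and updating after each observed prediction."""
    b = prior
    for o in outcomes:
        b = update_belief(b, o)
    return b
```

    With this update, an advisor who is right only half the time is still believed to be well above chance, reproducing the inflated accuracy perceptions the study reports.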

  10. Urine sampling and collection system optimization and testing

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Geating, J. A.; Koesterer, M. G.

    1975-01-01

    A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.

  11. Motor unit recruitment by size does not provide functional advantages for motor performance

    PubMed Central

    Dideriksen, Jakob L; Farina, Dario

    2013-01-01

    It is commonly assumed that the orderly recruitment of motor units by size provides a functional advantage for the performance of movements compared with a random recruitment order. On the other hand, the excitability of a motor neuron depends on its size and this is intrinsically linked to its innervation number. A range of innervation numbers among motor neurons corresponds to a range of sizes and thus to a range of excitabilities ordered by size. Therefore, if the excitation drive is similar among motor neurons, the recruitment by size is inevitably due to the intrinsic properties of motor neurons and may not have arisen to meet functional demands. In this view, we tested the assumption that orderly recruitment is necessarily beneficial by determining if this type of recruitment produces optimal motor output. Using evolutionary algorithms and without any a priori assumptions, the parameters of neuromuscular models were optimized with respect to several criteria for motor performance. Interestingly, the optimized model parameters matched well known neuromuscular properties, but none of the optimization criteria determined a consistent recruitment order by size unless this was imposed by an association between motor neuron size and excitability. Further, when the association between size and excitability was imposed, the resultant model of recruitment did not improve the motor performance with respect to the absence of orderly recruitment. A consistent observation was that optimal solutions for a variety of criteria of motor performance always required a broad range of innervation numbers in the population of motor neurons, skewed towards the small values. These results indicate that orderly recruitment of motor units in itself does not provide substantial functional advantages for motor control. Rather, the reason for its near-universal presence in human movements is that motor functions are optimized by a broad range of innervation numbers. PMID:24144879

  12. Motor unit recruitment by size does not provide functional advantages for motor performance.

    PubMed

    Dideriksen, Jakob L; Farina, Dario

    2013-12-15

    It is commonly assumed that the orderly recruitment of motor units by size provides a functional advantage for the performance of movements compared with a random recruitment order. On the other hand, the excitability of a motor neuron depends on its size and this is intrinsically linked to its innervation number. A range of innervation numbers among motor neurons corresponds to a range of sizes and thus to a range of excitabilities ordered by size. Therefore, if the excitation drive is similar among motor neurons, the recruitment by size is inevitably due to the intrinsic properties of motor neurons and may not have arisen to meet functional demands. In this view, we tested the assumption that orderly recruitment is necessarily beneficial by determining if this type of recruitment produces optimal motor output. Using evolutionary algorithms and without any a priori assumptions, the parameters of neuromuscular models were optimized with respect to several criteria for motor performance. Interestingly, the optimized model parameters matched well known neuromuscular properties, but none of the optimization criteria determined a consistent recruitment order by size unless this was imposed by an association between motor neuron size and excitability. Further, when the association between size and excitability was imposed, the resultant model of recruitment did not improve the motor performance with respect to the absence of orderly recruitment. A consistent observation was that optimal solutions for a variety of criteria of motor performance always required a broad range of innervation numbers in the population of motor neurons, skewed towards the small values. These results indicate that orderly recruitment of motor units in itself does not provide substantial functional advantages for motor control. Rather, the reason for its near-universal presence in human movements is that motor functions are optimized by a broad range of innervation numbers.
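
    The study's use of evolutionary algorithms to tune neuromuscular model parameters can be illustrated with a minimal sketch. The objective function and parameter ranges below are hypothetical stand-ins, not the paper's models; only the search scheme (mutate candidate innervation numbers, keep improvements) reflects the described approach.

```python
import random

def motor_output(innervation):
    # Toy performance criterion (assumed, not from the paper): combine total
    # force capacity with the fine-control resolution of the smallest units,
    # which rewards a broad, small-value-skewed range of innervation numbers.
    total = sum(innervation)
    resolution = 1.0 / min(innervation)
    return total * resolution

def evolve(n_units=10, generations=200, seed=1):
    """(1+1)-evolution strategy: mutate the innervation numbers of a motor
    neuron population and keep any candidate that does not reduce output."""
    rng = random.Random(seed)
    parent = [rng.randint(10, 1000) for _ in range(n_units)]
    best = motor_output(parent)
    for _ in range(generations):
        child = [max(1, g + rng.randint(-50, 50)) for g in parent]
        score = motor_output(child)
        if score >= best:            # selection: keep the better candidate
            parent, best = child, score
    return parent, best

solution, fitness = evolve()
```

    Note that, as in the paper, nothing in this search imposes a recruitment order; any ordering would have to be added as an explicit constraint.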

  13. Power-constrained supercomputing

    NASA Astrophysics Data System (ADS)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. 
Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. 
We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
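
    The schedule optimization described above (a DVFS state and thread count per computation phase, subject to a power bound) can be sketched in miniature. The dissertation solves this with an LP over measured values; the configuration table below is hypothetical, and exhaustive enumeration stands in for the LP solver.

```python
from itertools import product

# Hypothetical (DVFS state, thread count) -> (power draw in watts, speedup)
# for one computation phase between message-passing events.
CONFIGS = {
    ("low", 2): (40.0, 1.0), ("low", 4): (55.0, 1.6),
    ("high", 2): (70.0, 1.4), ("high", 4): (95.0, 2.2),
}

def best_schedule(n_phases, power_cap):
    """Pick one configuration per phase to maximize total speedup while the
    average power across phases stays under the cap (exhaustive search)."""
    best, best_speedup = None, -1.0
    for schedule in product(CONFIGS, repeat=n_phases):
        powers = [CONFIGS[c][0] for c in schedule]
        if sum(powers) / n_phases > power_cap:
            continue
        speedup = sum(CONFIGS[c][1] for c in schedule)
        if speedup > best_speedup:
            best, best_speedup = schedule, speedup
    return best, best_speedup

schedule, speedup = best_schedule(n_phases=3, power_cap=65.0)
```

    Enumeration grows exponentially with the number of phases, which is precisely why the dissertation's LP/ILP formulations matter at scale.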

  14. The extension of the thermal-vacuum test optimization program to multiple flights

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Byrd, J.

    1981-01-01

    The thermal vacuum test optimization model developed to provide an approach to the optimization of a test program based on prediction of flight performance with a single flight option in mind is extended to consider reflight as in space shuttle missions. The concept of 'utility', developed under the name of 'availability', is used to follow performance through the various options encountered when the capabilities of reflight and retrievability of space shuttle are available. Also, a 'lost value' model is modified to produce a measure of the probability of a mission's success, achieving a desired utility using a minimal cost test strategy. The resulting matrix of probabilities and their associated costs provides a means for project management to evaluate various test and reflight strategies.

  15. Addressing forecast uncertainty impact on CSP annual performance

    NASA Astrophysics Data System (ADS)

    Ferretti, Fabio; Hogendijk, Christopher; Aga, Vipluv; Ehrsam, Andreas

    2017-06-01

    This work analyzes the impact of weather forecast uncertainty on the annual performance of a Concentrated Solar Power (CSP) plant. The forecast time series was produced by a commercial forecast provider using hindcasting for the full year 2011 in hourly resolution for Ouarzazate, Morocco. The impact of forecast uncertainty was measured on three case studies representing typical tariff schemes observed in recent CSP projects, plus a spot market price scenario. The analysis was carried out using an annual performance model and a standard dispatch optimization algorithm based on dynamic programming. The dispatch optimizer proved to be a key requisite for maximizing annual revenues, depending on the price scenario, harvesting the maximum potential of the CSP plant. Forecast uncertainty affects the revenue enhancement achieved by a dispatch optimizer, depending on the error level and the price function. Results show that the forecasting accuracy of direct normal irradiance (DNI) is important for making the best use of an optimized dispatch, but also that a higher number of calculation updates can partially compensate for this uncertainty. The improvement in revenues can be significant depending on the price profile and the optimal operation strategy. Pathways to better performance are presented: performing more updates, both by repeatedly generating new optimized dispatch trajectories and by updating weather forecasts more often. This study shows the importance of accurate DNI forecasting for revenue enhancement, as well as of selecting weather services that provide multiple updates a day and probabilistic forecast information.
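
    The dynamic-programming dispatch idea can be sketched for a toy storage plant. The prices, solar inflows, and storage model below are invented for illustration; the study's plant model and tariff structures are far richer.

```python
def optimal_dispatch(prices, inflows, capacity, max_discharge):
    """Backward dynamic programming over storage levels: value[s] is the best
    revenue achievable from the current hour onward with storage level s.
    Decision each hour: how many energy units to discharge at that hour's price."""
    value = [0.0] * (capacity + 1)
    for price, inflow in zip(reversed(prices), reversed(inflows)):
        new_value = []
        for s in range(capacity + 1):
            best = float("-inf")
            for d in range(min(s, max_discharge) + 1):
                nxt = min(capacity, s - d + inflow)   # solar inflow recharges storage
                best = max(best, d * price + value[nxt])
            new_value.append(best)
        value = new_value
    return value[0]   # plant starts with empty storage

revenue = optimal_dispatch(prices=[10, 50, 30], inflows=[2, 1, 0],
                           capacity=3, max_discharge=2)
```

    In this example the optimizer correctly holds the cheap-hour inflow and sells it during the high-price hour, the same behavior that drives revenue enhancement in the study.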

  16. Initial Ares I Bending Filter Design

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark

    2007-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency, thereby phase stabilizing the first Ares-I flex mode. To minimize rigid-body performance impacts, constraints in the optimization algorithm prioritized minimizing the bandwidth decrease caused by the addition of the bending filters. The bending filters presented here have been demonstrated to provide a stable first-stage control system in both the frequency domain and the MSFC MAVERIC time-domain simulation.

  17. An expert system for integrated structural analysis and design optimization for aerospace structures

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was first developed in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems that provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This allows engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient, and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce the time to completion of structural design. An extensive literature survey in the fields of structural analysis, design optimization, artificial intelligence, and database management systems, and their application to the structural design process, was first performed. A feasibility study was then performed, and the architecture and conceptual design for the integrated 'intelligent' structural analysis and design optimization software were developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. This approach improves the expressiveness of knowledge representation (especially for structural analysis and design applications), provides the ability to build very large and practical expert systems, and provides an efficient way of storing knowledge. Functional specifications for the expert systems were then developed.
The ORL/AI shell was then used to develop a variety of modules of expert systems for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software, AutoDesign, so developed, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies, used by a range of engineers with different levels of background and expertise. Based on the feedback obtained by such users, conclusions were developed and are provided.

  18. An expert system for integrated structural analysis and design optimization for aerospace structures

    NASA Astrophysics Data System (ADS)

    1992-04-01

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was first developed in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems that provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This allows engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient, and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce the time to completion of structural design. An extensive literature survey in the fields of structural analysis, design optimization, artificial intelligence, and database management systems, and their application to the structural design process, was first performed. A feasibility study was then performed, and the architecture and conceptual design for the integrated 'intelligent' structural analysis and design optimization software were developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. This approach improves the expressiveness of knowledge representation (especially for structural analysis and design applications), provides the ability to build very large and practical expert systems, and provides an efficient way of storing knowledge. Functional specifications for the expert systems were then developed.
The ORL/AI shell was then used to develop a variety of modules of expert systems for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software, AutoDesign, so developed, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies, used by a range of engineers with different levels of background and expertise. Based on the feedback obtained by such users, conclusions were developed and are provided.

  19. A seismic optimization procedure for reinforced concrete framed buildings based on eigenfrequency optimization

    NASA Astrophysics Data System (ADS)

    Arroyo, Orlando; Gutiérrez, Sergio

    2017-07-01

    Several seismic optimization methods have been proposed to improve the performance of reinforced concrete framed (RCF) buildings; however, they have not been widely adopted among practising engineers because they require complex nonlinear models and are computationally expensive. This article presents a procedure to improve the seismic performance of RCF buildings based on eigenfrequency optimization, which is effective, simple to implement and efficient. The method is used to optimize a 10-storey regular building, and its effectiveness is demonstrated by nonlinear time history analyses, which show important reductions in storey drifts and lateral displacements compared to a non-optimized building. A second example for an irregular six-storey building demonstrates that the method provides benefits to a wide range of RCF structures and supports the applicability of the proposed method.

  20. Optimizing the Reliability and Performance of Service Composition Applications with Fault Tolerance in Wireless Sensor Networks

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang

    2015-01-01

    Service composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help service composition technology promote the practical application of WSNs. The reliability and performance optimization methods used for traditional software systems are mostly based on instantiations of software components, which are inapplicable and inefficient for the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Using this model, an efficient optimization algorithm for the reliability and performance of SCAs in WSNs is developed, based on a Genetic Algorithm (GA), to find the optimal structure of fault-tolerant SCAs in WSNs. In order to examine the feasibility of our algorithm, we evaluated its performance. Furthermore, the interrelationships between reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
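
    A GA applied to a redundancy optimization problem of this general kind can be sketched as follows. The per-task reliabilities, costs, and budget are hypothetical, and a simple series-parallel reliability formula stands in for the paper's UGF-based multi-state model; only the GA structure (selection, one-point crossover, mutation) matches the described approach.

```python
import random

# Hypothetical per-task figures: reliability of one service instance, its cost.
TASKS = [(0.90, 2.0), (0.85, 3.0), (0.95, 1.5)]
BUDGET = 20.0

def fitness(redundancy):
    """Reliability of a series system with n parallel copies per task,
    penalized to zero when the cost budget is exceeded."""
    cost = sum(n * c for n, (_, c) in zip(redundancy, TASKS))
    if cost > BUDGET:
        return 0.0
    rel = 1.0
    for n, (r, _) in zip(redundancy, TASKS):
        rel *= 1.0 - (1.0 - r) ** n      # task works if at least one copy works
    return rel

def genetic_search(pop_size=30, generations=60, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(1, 4) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(TASKS))    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                # mutation
                i = rng.randrange(len(TASKS))
                child[i] = max(1, child[i] + rng.choice([-1, 1]))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_search()
```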

  1. Optimal Reference Strain Structure for Studying Dynamic Responses of Flexible Rockets

    NASA Technical Reports Server (NTRS)

    Tsushima, Natsuki; Su, Weihua; Wolf, Michael G.; Griffin, Edwin D.; Dumoulin, Marie P.

    2017-01-01

    In the proposed paper, the optimal design of reference strain structures (RSS) will be performed, targeting accurate observation of the dynamic bending and torsion deformation of a flexible rocket. It will provide a detailed description of the finite-element (FE) model of a notional flexible rocket created in MSC.Patran. The RSS will be attached longitudinally along the side of the rocket to track the deformation of the thin-walled structure under external loads. An integrated surrogate-based multi-objective optimization approach will be developed to find the optimal design of the RSS using the FE model. The Kriging method will be used to construct the surrogate model. For data sampling and performance evaluation, static/transient analyses will be performed with MSC.Nastran/Patran. The multi-objective optimization will be solved with NSGA-II to minimize the difference between the strains of the launch vehicle and the RSS. Finally, the performance of the optimal RSS will be evaluated by checking its strain-tracking capability in different numerical simulations of the flexible rocket.

  2. Theoretical and experimental comparative analysis of beamforming methods for loudspeaker arrays under given performance constraints

    NASA Astrophysics Data System (ADS)

    Olivieri, Ferdinando; Fazi, Filippo Maria; Nelson, Philip A.; Shin, Mincheol; Fontana, Simone; Yue, Lang

    2016-07-01

    Beamforming methods are available that provide the signals used to drive an array of sources for the implementation of so-called personal audio systems. In this work, the performance of the delay-and-sum (DAS) method and of three widely used methods for optimal beamforming is compared by means of computer simulations and experiments in an anechoic environment, using a linear array of sources with given constraints on the quality of the reproduced field at the listener's position and a limit on the input energy to the array. Using the DAS method as a benchmark for performance, the frequency-domain responses of the loudspeaker filters can be characterized in three regions. In the first region, at low frequencies, input signals designed with the optimal methods are identical and provide higher directivity than the DAS method. In the second region, the performance of the optimal methods is similar to that of the DAS method. The third region starts above the limit set by spatial aliasing. A method is presented to estimate the boundaries of these regions.
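
    The DAS benchmark used in this comparison admits a compact sketch for a uniform linear array. The array geometry, frequency, and steering angle below are arbitrary examples, not the paper's experimental setup.

```python
import cmath
import math

def das_weights(n_sources, spacing, steer_angle_deg, freq, c=343.0):
    """Delay-and-sum weights for a uniform linear array: each source is
    phase-delayed so contributions add coherently toward the steering angle."""
    k = 2.0 * math.pi * freq / c                 # wavenumber
    sin_t = math.sin(math.radians(steer_angle_deg))
    return [cmath.exp(-1j * k * n * spacing * sin_t) / n_sources
            for n in range(n_sources)]

def array_response(weights, spacing, angle_deg, freq, c=343.0):
    """Far-field response of the weighted array in a given direction."""
    k = 2.0 * math.pi * freq / c
    sin_t = math.sin(math.radians(angle_deg))
    return sum(w * cmath.exp(1j * k * n * spacing * sin_t)
               for n, w in enumerate(weights))

w = das_weights(n_sources=8, spacing=0.1, steer_angle_deg=20.0, freq=1000.0)
on_axis = abs(array_response(w, 0.1, 20.0, 1000.0))   # unity in the look direction
```

    The optimal methods studied in the paper replace these fixed phase delays with weights computed under constraints on reproduced-field quality and input energy.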

  3. Thermofluid Analysis of Magnetocaloric Refrigeration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdelaziz, Omar; Gluesenkamp, Kyle R; Vineyard, Edward Allan

    While there have been extensive studies on the thermofluid characteristics of different magnetocaloric refrigeration systems, a conclusive optimization study using non-dimensional parameters applicable to a generic system has not yet been reported. In this study, a numerical model has been developed for the optimization of an active magnetic refrigerator (AMR). This model is computationally efficient and robust, making it appropriate for running the thousands of simulations required for parametric study and optimization. The governing equations have been non-dimensionalized and numerically solved using the finite difference method. A parametric study over a wide range of non-dimensional numbers has been performed. While the goal of AMR systems is to improve competing performance parameters including COP, cooling capacity and temperature span, a new parameter, called AMR performance index-1, has been introduced in order to perform multi-objective optimization and simultaneously exploit all these parameters. The multi-objective optimization is carried out over a wide range of the non-dimensional parameters. The results of this study provide general guidelines for designing high-performance AMR systems.
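
    The idea of collapsing competing metrics into one optimizable scalar can be illustrated as below. The weighting scheme and reference values are hypothetical; the abstract does not give the actual definition of the AMR performance index-1.

```python
def performance_index(cop, capacity, span,
                      weights=(1 / 3, 1 / 3, 1 / 3),
                      ref=(5.0, 500.0, 30.0)):
    """Illustrative composite index: normalize each competing metric (COP,
    cooling capacity in W, temperature span in K) by a reference value, then
    combine with weights so a single scalar can drive the optimization."""
    metrics = (cop, capacity, span)
    return sum(w * m / r for w, m, r in zip(weights, metrics, ref))

# A design matching all reference values scores exactly 1.0.
baseline = performance_index(5.0, 500.0, 30.0)
```

    Normalization is what makes the comparison meaningful: without it, the metric with the largest raw magnitude would dominate the index regardless of the weights.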

  4. Perform - A performance optimizing computer program for dynamic systems subject to transient loadings

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Wang, B. P.; Yoo, Y.; Clark, B.

    1973-01-01

    A description and applications of a computer capability for determining the ultimate optimal behavior of a dynamically loaded structural-mechanical system are presented. This capability provides characteristics of the theoretically best, or limiting, design concept according to response criteria dictated by design requirements. Equations of motion of the system in first or second order form include incompletely specified elements whose characteristics are determined in the optimization of one or more performance indices subject to the response criteria in the form of constraints. The system is subject to deterministic transient inputs, and the computer capability is designed to operate with a large linear programming on-the-shelf software package which performs the desired optimization. The report contains user-oriented program documentation in engineering, problem-oriented form. Applications cover a wide variety of dynamics problems including those associated with such diverse configurations as a missile-silo system, impacting freight cars, and an aircraft ride control system.

  5. The value of compressed air energy storage in energy and reserve markets

    DOE PAGES

    Drury, Easan; Denholm, Paul; Sioshansi, Ramteen

    2011-06-28

    Storage devices can provide several grid services; however, it is challenging to quantify the value of providing several services and to optimally allocate storage resources to maximize value. We develop a co-optimized Compressed Air Energy Storage (CAES) dispatch model to characterize the value of providing operating reserves in addition to energy arbitrage in several U.S. markets. We use the model to: (1) quantify the added value of providing operating reserves in addition to energy arbitrage; (2) evaluate the dynamic nature of optimally allocating storage resources into energy and reserve markets; and (3) quantify the sensitivity of CAES net revenues to several design and performance parameters. We find that conventional CAES systems could earn an additional 23 ± 10/kW-yr by providing operating reserves, and adiabatic CAES systems could earn an additional 28 ± 13/kW-yr. We find that arbitrage-only revenues are unlikely to support a CAES investment in most market locations, but the addition of reserve revenues could support a conventional CAES investment in several markets. Adiabatic CAES revenues are not likely to support an investment in most regions studied. As a result, modifying CAES design and performance parameters primarily impacts arbitrage revenues, and optimizing CAES design will be nearly independent of dispatch strategy.
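
    The energy-arbitrage component of the dispatch can be sketched with a deliberately simplified model. The prices are invented; round-trip efficiency, fuel cost, temporal ordering of charge and discharge, and the reserve co-optimization central to the paper are all omitted here.

```python
def arbitrage_profit(prices, hours_of_storage):
    """Toy arbitrage dispatch: charge one unit of energy in each of the
    cheapest hours and discharge one unit in each of the priciest hours.
    (Ignores efficiency losses and assumes charge/discharge ordering works out.)"""
    order = sorted(range(len(prices)), key=prices.__getitem__)
    charge_hours = order[:hours_of_storage]          # cheapest hours: buy
    discharge_hours = order[-hours_of_storage:]      # priciest hours: sell
    revenue = sum(prices[h] for h in discharge_hours)
    cost = sum(prices[h] for h in charge_hours)
    return revenue - cost

# Hypothetical hourly prices in $/MWh over six hours.
profit = arbitrage_profit([20, 15, 30, 60, 55, 25], hours_of_storage=2)
```

    The paper's finding is that this arbitrage value alone is usually too small to justify a CAES investment, which is why co-optimizing with reserve provision matters.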

  6. Lean and Efficient Software: Whole-Program Optimization of Executables

    DTIC Science & Technology

    2015-09-30

    ... Many levels of library interfaces—where some libraries are dynamically linked and some are provided in binary form only—significantly limit ... software at build time. The opportunity: our objective in this project is to substantially improve the performance, size, and robustness of binary executables by using static and dynamic binary program analysis techniques to perform whole-program optimization directly on compiled programs.

  7. Optimal design of a main driving mechanism for servo punch press based on performance atlases

    NASA Astrophysics Data System (ADS)

    Zhou, Yanhua; Xie, Fugui; Liu, Xinjun

    2013-09-01

    The servomotor-driven turret punch press is attracting more attention and being developed more intensively due to its advantages of high speed, high accuracy, high flexibility, high productivity, low noise, cleanliness and energy saving. To effectively improve performance and lower cost, it is necessary to develop new mechanisms and establish a corresponding optimal design method with uniform performance indices. A new patented main driving mechanism and a new optimal design method are proposed. In the optimal design, the performance indices, i.e., the local motion/force transmission indices ITI and OTI, the good transmission workspace (GTW) and the global transmission indices GTIs, are defined. The non-dimensional normalization method is used to obtain all feasible solutions in dimensional synthesis. Thereafter, the performance atlases, which can present all possible design solutions, are depicted. As a result, a feasible solution of the mechanism with good motion/force transmission performance is obtained, and the solution can be flexibly adjusted by the designer according to practical design requirements. The proposed mechanism is original, and the presented design method provides a feasible solution to the optimal design of the main driving mechanism for a servo punch press.

  8. Cost effective simulation-based multiobjective optimization in the performance of an internal combustion engine

    NASA Astrophysics Data System (ADS)

    Aittokoski, Timo; Miettinen, Kaisa

    2008-07-01

    Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.
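
    Multiobjective optimization of the kind described here hinges on Pareto dominance: with conflicting objectives there is no single best design, only a front of non-dominated trade-offs from which the engineer chooses interactively. A minimal sketch (the objective triples below are hypothetical engine-run outcomes, all to be minimized, so power output is negated):

```python
def dominates(a, b):
    """True if objective vector a is at least as good as b in every objective
    (minimization) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Filter simulation outcomes down to the non-dominated set, from which a
    decision maker can interactively pick a preferred trade-off."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (fuel consumption, -power output, emissions) triples.
outcomes = [(5.0, -80.0, 1.2), (4.5, -75.0, 1.0), (5.5, -82.0, 1.5),
            (4.5, -75.0, 1.4), (6.0, -70.0, 2.0)]
front = pareto_front(outcomes)
```

    Because each simulation run is expensive, practical systems of the kind the article discusses evaluate as few candidate points as possible before presenting such a front to the designer.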

  9. Thin client performance for remote 3-D image display.

    PubMed

    Lai, Albert; Nieh, Jason; Laine, Andrew; Starren, Justin

    2003-01-01

    Several trends in biomedical computing are converging in a way that will require new approaches to telehealth image display. Image viewing is becoming an "anytime, anywhere" activity. In addition, organizations are beginning to recognize that healthcare providers are highly mobile and that optimal care requires providing information wherever the provider and patient are. Thin-client computing is one way to support image viewing in this complex environment. However, little is known about the behavior of thin-client systems in supporting image transfer in modern heterogeneous networks. Our results show that thin clients can deliver acceptable performance over conditions commonly seen in wireless networks if newer protocols optimized for these conditions are used.

  10. NASA Human Health and Performance Information Architecture Panel

    NASA Technical Reports Server (NTRS)

    Johnson-Throop, Kathy; Kadwa, Binafer; VanBaalen, Mary

    2014-01-01

    The Human Health and Performance (HH&P) Directorate at NASA's Johnson Space Center has a mission to enable optimization of human health and performance throughout all phases of spaceflight. All HH&P functions are ultimately aimed at achieving this mission. Our activities enable mission success, optimizing human health and productivity in space before, during, and after the actual spaceflight experience of our crews, and include support for ground-based functions. Many of our spaceflight innovations also provide solutions for terrestrial challenges, thereby enhancing life on Earth.

  11. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

    Strategies needed to optimize our entire energy system. High-Performance Computing Data Center: high-performance computing facilities at NREL provide high-speed

  12. A stochastic optimal feedforward and feedback control methodology for superagility

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Direskeneli, Haldun; Taylor, Deborah B.

    1992-01-01

    A new control design methodology is developed: Stochastic Optimal Feedforward and Feedback Technology (SOFFT). Traditional design techniques optimize a single cost function (which expresses the design objectives) to obtain both the feedforward and feedback control laws. This approach places conflicting demands on the control law, such as fast tracking versus noise attenuation/disturbance rejection. In the SOFFT approach, two cost functions are defined. The feedforward control law is designed to optimize one cost function, while the feedback law optimizes the other. By separating the design objectives and decoupling the feedforward and feedback design processes, both objectives can be achieved fully. A new measure of command tracking performance, Z-plots, is also developed. By analyzing these plots at off-nominal conditions, the sensitivity or robustness of the system in tracking commands can be predicted. Z-plots provide an important tool for designing robust control systems. The Variable-Gain SOFFT methodology was used to design a flight control system for the F/A-18 aircraft. It is shown that SOFFT can be used to expand the operating regime and provide greater performance (flying/handling qualities) throughout the extended flight regime. This work was performed under the NASA SBIR program. ICS plans to market the software developed as a new module in its commercial CACSD software package: ACET.

  13. Toward Optimization of Gaze-Controlled Human-Computer Interaction: Application to Hindi Virtual Keyboard for Stroke Patients.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, Kongfatt; Dutta, Ashish; Prasad, Girijesh

    2018-04-01

    Virtual keyboard applications and alternative communication devices provide new means of communication to assist disabled people. To date, virtual keyboard optimization schemes based on script-specific information, along with multimodal input access facilities, are limited. In this paper, we propose a novel method for optimizing the position of the displayed items for gaze-controlled tree-based menu selection systems by considering a combination of letter frequency and command selection time. The optimized graphical user interface layout has been designed for a Hindi language virtual keyboard based on a menu wherein 10 commands provide access to type 88 different characters, along with additional text editing commands. The system can be controlled in two different modes: eye-tracking alone and eye-tracking with an access soft-switch. Five different keyboard layouts have been presented and evaluated with ten healthy participants. Furthermore, the two best performing keyboard layouts have been evaluated with eye-tracking alone on ten stroke patients. The overall performance analysis demonstrated significantly superior typing performance, high usability (87% SUS score), and low workload (NASA-TLX score of 17) for the letter-frequency- and time-based organization with script-specific arrangement. This paper presents the first optimized gaze-controlled Hindi virtual keyboard, which can be extended to other languages.
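
    The letter-frequency-and-selection-time criterion described above can be sketched in a few lines. The frequencies, per-slot dwell times, and the greedy pairing rule below are illustrative assumptions, not the paper's data or optimization method:

```python
# Toy sketch: minimize expected typing cost  sum_i freq[i] * time[slot(i)]
# by placing frequent letters in menu slots with short selection times.
# (Hypothetical frequencies and dwell times, not the paper's measurements.)

def optimal_assignment(freqs, slot_times):
    """Pair the highest-frequency items with the lowest-time slots."""
    order_f = sorted(range(len(freqs)), key=lambda i: -freqs[i])
    order_t = sorted(range(len(slot_times)), key=lambda i: slot_times[i])
    return {f: t for f, t in zip(order_f, order_t)}

def expected_cost(freqs, slot_times, slot_of):
    return sum(freqs[i] * slot_times[slot_of[i]] for i in range(len(freqs)))

freqs = [0.10, 0.30, 0.15, 0.25, 0.20]   # hypothetical letter frequencies
times = [1.0, 1.5, 2.0, 2.5, 3.0]        # hypothetical per-slot selection times (s)
best = optimal_assignment(freqs, times)
naive = {i: i for i in range(5)}         # fixed (e.g. alphabetical) placement
```

    By the rearrangement inequality, pairing the most frequent items with the fastest slots minimizes this simplified expected cost, so the greedy pairing is exact for this toy objective.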

  14. National Athletic Trainers' Association Position Statement: Fluid Replacement for Athletes

    PubMed Central

    Casa, Douglas J.; Armstrong, Lawrence E.; Hillman, Susan K.; Montain, Scott J.; Reiff, Ralph V.; Rich, Brent S. E.; Roberts, William O.; Stone, Jennifer A.

    2000-01-01

    Objective: To present recommendations to optimize the fluid-replacement practices of athletes. Background: Dehydration can compromise athletic performance and increase the risk of exertional heat injury. Athletes do not voluntarily drink sufficient water to prevent dehydration during physical activity. Drinking behavior can be modified by education, increasing accessibility, and optimizing palatability. However, excessive overdrinking should be avoided because it can also compromise physical performance and health. We provide practical recommendations regarding fluid replacement for athletes. Recommendations: Educate athletes regarding the risks of dehydration and overhydration on health and physical performance. Work with individual athletes to develop fluid-replacement practices that optimize hydration status before, during, and after competition. PMID:16558633

  15. Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1997-01-01

    A nonparametric-validated, surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set-based geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general, two-output, two-performance-metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies, and the error estimates provided by the output validation step still apply and require no additional appeals to the expensive analysis. Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.

  16. Optimized PID control of depth of hypnosis in anesthesia.

    PubMed

    Padula, Fabrizio; Ionescu, Clara; Latronico, Nicola; Paltenghi, Massimiliano; Visioli, Antonio; Vivacqua, Giulio

    2017-06-01

    This paper addresses the use of proportional-integral-derivative controllers for regulating the depth of hypnosis in anesthesia by using propofol administration and the bispectral index as a controlled variable. In fact, introducing an automatic control system might provide significant benefits for the patient in reducing the risk of under- and over-dosing. In this study, the controller parameters are obtained through genetic algorithms by solving a min-max optimization problem. A set of 12 patient models representative of a large population variance is used to test controller robustness. The worst-case performance in the considered population is minimized considering two different scenarios: the induction case and the maintenance case. Our results indicate that including a gain scheduling strategy enables optimal performance for the induction and maintenance phases separately. Using a single tuning to address both tasks may result in a loss of performance of up to 102% in the induction phase and up to 31% in the maintenance phase. Furthermore, it is shown that a suitably designed low-pass filter on the controller output can handle the trade-off between performance and the effect of noise in the control variable. Optimally tuned PID controllers provide a fast induction time with an acceptable overshoot and satisfactory disturbance rejection performance during maintenance. These features make them a very good tool for comparison when other control algorithms are developed. Copyright © 2017 Elsevier B.V. All rights reserved.
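
    The min-max idea, tune one controller against the worst case of a model population, can be illustrated with a toy sketch. The first-order "patient" models, the PI (rather than full PID) structure, and the random search below are stand-ins for the paper's PK/PD models and genetic algorithm:

```python
import random

# Hedged sketch of min-max tuning: pick gains that minimize the WORST-case
# integrated absolute error (IAE) across a population of plant variants.
# (Toy first-order plants, not the paper's pharmacological models.)

def simulate_iae(kp, ki, gain, tau, dt=0.1, T=30.0):
    """IAE of a PI loop on dy/dt = (-y + gain*u)/tau tracking setpoint 1."""
    y, integ, iae, t = 0.0, 0.0, 0.0, 0.0
    while t < T:
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (-y + gain * u) / tau  # forward-Euler plant update
        iae += abs(e) * dt
        t += dt
    return iae

population = [(1.0, 2.0), (1.5, 3.0), (0.7, 4.0)]  # (gain, tau) variants

def worst_case(kp, ki):
    return max(simulate_iae(kp, ki, g, tau) for g, tau in population)

random.seed(0)  # random search stands in for the paper's genetic algorithm
best = min(((random.uniform(0.1, 5.0), random.uniform(0.01, 2.0))
            for _ in range(300)), key=lambda p: worst_case(*p))
```

    The same wrapper pattern (inner loop over the model population, outer search over gains) is what makes the optimization "min-max" regardless of which search algorithm is used.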

  17. Acquisition of decision making criteria: reward rate ultimately beats accuracy.

    PubMed

    Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D

    2011-02-01

    Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
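
    The single-threshold finding above is commonly formalized with drift-diffusion expressions for error rate and decision time (cf. Bogacz et al., 2006). The sketch below uses hypothetical task constants, not the study's fitted parameters, to show how reward rate depends on one decision threshold:

```python
import math

# Hedged sketch: standard drift-diffusion formulas with illustrative constants.
# a = drift rate, c = noise SD, D = non-decision plus intertrial time (s).
a, c, D = 1.0, 1.0, 2.0

def reward_rate(z):
    """Correct responses per second as a function of decision threshold z."""
    er = 1.0 / (1.0 + math.exp(2.0 * a * z / c**2))   # error rate
    dt = (z / a) * math.tanh(a * z / c**2)            # mean decision time
    return (1.0 - er) / (dt + D)

# brute-force search for the reward-rate-maximizing single threshold
zs = [i * 0.01 for i in range(1, 500)]
z_opt = max(zs, key=reward_rate)
```

    A threshold that is too low trades too much accuracy for speed, and one that is too high wastes time on already-certain decisions; the maximum sits in between, which is the sense in which a single well-chosen threshold can be nearly optimal.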

  18. Rapid design and optimization of low-thrust rendezvous/interception trajectory for asteroid deflection missions

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Zhu, Yongsheng; Wang, Yukai

    2014-02-01

    Asteroid deflection techniques are essential in order to protect the Earth from catastrophic impacts by hazardous asteroids. Rapid design and optimization of low-thrust rendezvous/interception trajectories is considered as one of the key technologies to successfully deflect potentially hazardous asteroids. In this paper, we address a general framework for the rapid design and optimization of low-thrust rendezvous/interception trajectories for future asteroid deflection missions. The design and optimization process includes three closely associated steps. Firstly, shape-based approaches and genetic algorithm (GA) are adopted to perform preliminary design, which provides a reasonable initial guess for subsequent accurate optimization. Secondly, Radau pseudospectral method is utilized to transcribe the low-thrust trajectory optimization problem into a discrete nonlinear programming (NLP) problem. Finally, sequential quadratic programming (SQP) is used to efficiently solve the nonlinear programming problem and obtain the optimal low-thrust rendezvous/interception trajectories. The rapid design and optimization algorithms developed in this paper are validated by three simulation cases with different performance indexes and boundary constraints.

  19. Global linear-irreversible principle for optimization in finite-time thermodynamics

    NASA Astrophysics Data System (ADS)

    Johal, Ramandeep S.

    2018-03-01

    There is intense effort into understanding the universal properties of finite-time models of thermal machines at optimal performance, such as efficiency at maximum power, coefficient of performance at maximum cooling power, and other such criteria. In this letter, a global principle consistent with linear irreversible thermodynamics is proposed for the whole cycle, without considering details of irreversibilities in the individual steps of the cycle. This helps to express the total duration of the cycle as $\tau \propto \bar{Q}^2 / \Delta_{\text{tot}} S$, where $\bar{Q}$ models the effective heat transferred through the machine during the cycle, and $\Delta_{\text{tot}} S$ is the total entropy generated. By taking $\bar{Q}$ in the form of simple algebraic means (such as arithmetic and geometric means) over the heats exchanged by the reservoirs, the present approach is able to predict various standard expressions for figures of merit at optimal performance, as well as the bounds respected by them. It simplifies the optimization procedure to a one-parameter optimization, and provides a fresh perspective on the issue of universality at optimal performance for small differences in reservoir temperatures. As an illustration, we compare the performance of a partially optimized four-step endoreversible cycle with the present approach.

  20. Optimization on the impeller of a low-specific-speed centrifugal pump for hydraulic performance improvement

    NASA Astrophysics Data System (ADS)

    Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng

    2016-09-01

    In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are then constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of impeller parameters. The results show that the performance curve predicted by numerical simulation is in good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. The comparison of the inner flow between the original pump and the optimized one illustrates the performance improvement. The optimization process can provide a useful reference for performance improvement of other pumps, and even for the reduction of pressure fluctuations.
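
    The surrogate-plus-PSO step can be sketched with a stand-in objective. The quadratic "surrogate", the variable bounds, and the optimum location below are hypothetical, not the paper's CFX-fitted model:

```python
import random

# Minimal particle swarm optimization on a hypothetical surrogate of
# (negative) pump efficiency vs. (b2, beta2, phi). Illustrative only.
random.seed(1)

def surrogate_neg_eff(x):
    b2, beta2, phi = x
    # smooth stand-in surrogate with its minimum at (14, 25, 120)
    return (b2 - 14.0)**2 + 0.5 * (beta2 - 25.0)**2 + 0.01 * (phi - 120.0)**2

bounds = [(10, 18), (18, 32), (90, 150)]  # hypothetical design-variable ranges

def pso(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = pbest[pval.index(min(pval))][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (g[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
        g = pbest[pval.index(min(pval))][:]
    return g

best = pso(surrogate_neg_eff, bounds)
```

    Because the surrogate is cheap to evaluate, the swarm can afford thousands of evaluations, which is exactly why the expensive CFD runs are confined to building the surrogate rather than driving the optimizer directly.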

  1. Robust fuel- and time-optimal control of uncertain flexible space structures

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken

    1993-01-01

    The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.

  2. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
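
    A minimal stochastic-EnKF analysis step, the building block that SEOD couples with its optimal-design loop, might look like the following. The linear observation model and two-parameter state are a toy, not the unsaturated-flow application:

```python
import numpy as np

# Minimal stochastic-EnKF analysis step on a toy linear problem.
rng = np.random.default_rng(0)

def enkf_update(ens, obs, H, R):
    """ens: (Ne, Nx) ensemble; obs: (Ny,); H: (Ny, Nx); R: (Ny, Ny)."""
    Ne = ens.shape[0]
    X = ens - ens.mean(axis=0)              # state anomalies
    Y = X @ H.T                             # predicted-observation anomalies
    Pyy = Y.T @ Y / (Ne - 1) + R            # innovation covariance
    Pxy = X.T @ Y / (Ne - 1)                # state-observation cross-covariance
    K = Pxy @ np.linalg.inv(Pyy)            # Kalman gain
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), R, Ne)
    return ens + (perturbed - ens @ H.T) @ K.T

# toy: estimate a 2-component state from a direct, noisy measurement of x0
ens = rng.normal([0.0, 0.0], 1.0, size=(200, 2))
H = np.array([[1.0, 0.0]])
R = np.array([[0.05]])
post = enkf_update(ens, np.array([1.5]), H, R)
```

    An optimal-design loop in the SEOD spirit would repeat this update for each candidate measurement location and pick the one whose posterior ensemble scores best under the chosen information metric.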

  3. An Optimized Control for LLC Resonant Converter with Wide Load Range

    NASA Astrophysics Data System (ADS)

    Xi, Xia; Qian, Qinsong

    2017-05-01

    This paper presents an optimized control that allows LLC resonant converters to operate over a wider load range while providing good closed-loop performance. The proposed control employs two paralleled digital compensations to guarantee good closed-loop performance over a wide load range during the steady state, while an optimized trajectory control takes over to change the gate-driving signals immediately at load transients. Finally, the proposed control has been implemented and tested on a 150 W, 200 kHz, 400 V/24 V LLC resonant converter, and the results validate the proposed method.

  4. Sensitivity of Space Station alpha joint robust controller to structural modal parameter variations

    NASA Technical Reports Server (NTRS)

    Kumar, Renjith R.; Cooper, Paul A.; Lim, Tae W.

    1991-01-01

    The photovoltaic array sun tracking control system of Space Station Freedom is described. A synthesis procedure for determining optimized values of the design variables of the control system is developed using a constrained optimization technique. The synthesis is performed to provide a given level of stability margin, to achieve the most responsive tracking performance, and to meet other design requirements. Performance of the baseline design, which is synthesized using predicted structural characteristics, is discussed, and the sensitivity of the stability margin is examined for variations of the frequencies, mode shapes and damping ratios of dominant structural modes. The design provides enough robustness to tolerate a sizeable error in the predicted modal parameters. A study was made of the sensitivity of performance indicators as the modal parameters of the dominant modes vary. The design variables are resynthesized for varying modal parameters in order to achieve the most responsive tracking performance while satisfying the design requirements. This procedure of reoptimizing the design parameters would be useful in improving the control system performance once accurate modal data are provided.

  5. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    PubMed

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study compares the conventional coding and sorting algorithms most commonly used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
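
    The percentage-of-matches scoring can be sketched in a few lines. The code letters, the use of 'X' for teeth unavailable for comparison, and the five-tooth records below are hypothetical stand-ins, not the actual seven-code scheme or WinID3's data:

```python
# Hedged sketch: rank antemortem candidates by the fraction of comparable
# per-tooth codes that match a postmortem record. (Hypothetical coding.)

def percent_match(postmortem, antemortem):
    """Compare per-tooth codes; 'X' marks teeth unavailable for comparison."""
    comparable = [(p, a) for p, a in zip(postmortem, antemortem)
                  if p != 'X' and a != 'X']
    if not comparable:
        return 0.0
    hits = sum(1 for p, a in comparable if p == a)
    return hits / len(comparable)

def rank(postmortem, antemortem_db):
    scored = [(name, percent_match(postmortem, rec))
              for name, rec in antemortem_db]
    return sorted(scored, key=lambda t: -t[1])

pm = "VMRXF"                                   # hypothetical 5-tooth record
db = [("A", "VMRCC"), ("B", "VVVVV"), ("C", "VMRXF")]
ranked = rank(pm, db)
```

    Normalizing by the number of comparable teeth, rather than the total count, is what lets fragmentary postmortem records still produce meaningful rankings.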

  6. Behavioural and psychophysiological correlates of athletic performance: a test of the multi-action plan model.

    PubMed

    Bertollo, Maurizio; Bortoli, Laura; Gramaccioni, Gianfranco; Hanin, Yuri; Comani, Silvia; Robazza, Claudio

    2013-06-01

    The main purposes of the present study were to substantiate the existence of the four types of performance categories (i.e., optimal-automatic, optimal-controlled, suboptimal-controlled, and suboptimal-automatic) as hypothesised in the multi-action plan (MAP) model, and to investigate whether some specific affective, behavioural, psychophysiological, and postural trends may typify each type of performance. A 20-year-old athlete of the Italian shooting team, and a 46-year-old athlete of the Italian dart-throwing team participated in the study. Athletes were asked to identify the core components of the action and then to execute a large number of shots/flights. A 2 × 2 (optimal/suboptimal × automated/controlled) within subjects multivariate analysis of variance was performed to test the differences among the four types of performance. Findings provided preliminary evidence of psychophysiological and postural differences among four performance categories as conceptualized within the MAP model. Monitoring the entire spectrum of psychophysiological and behavioural features related to the different types of performance is important to develop and implement biofeedback and neurofeedback techniques aimed at helping athletes to identify individual zones of optimal functioning and to enhance their performance.

  7. Optimization of polymer electrolyte membrane fuel cell flow channels using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Catlin, Glenn; Advani, Suresh G.; Prasad, Ajay K.

    The design of the flow channels in PEM fuel cells directly impacts the transport of reactant gases to the electrodes and affects cell performance. This paper presents results from a study to optimize the geometry of the flow channels in a PEM fuel cell. The optimization process implements a genetic algorithm to rapidly converge on the channel geometry that provides the highest net power output from the cell. In addition, this work implements a method for the automatic generation of parameterized channel domains that are evaluated for performance using a commercial computational fluid dynamics package from ANSYS. The software package includes GAMBIT as the solid modeling and meshing software, the solver FLUENT, and a PEMFC Add-on Module capable of modeling the relevant physical and electrochemical mechanisms that describe PEM fuel cell operation. The result of the optimization process is a set of optimal channel geometry values for the single-serpentine channel configuration. The performance of the optimal geometry is contrasted with a sub-optimal one by comparing contour plots of current density, oxygen and hydrogen concentration. In addition, the role of convective bypass in bringing fresh reactant to the catalyst layer is examined in detail. The convergence to the optimal geometry is confirmed by a bracketing study which compares the performance of the best individual to those of its neighbors with adjacent parameter values.
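
    The genetic-algorithm search loop can be sketched with a smooth stand-in fitness. The bounds and the analytic "net power" function below are hypothetical; the paper instead evaluates each candidate geometry with a FLUENT PEMFC model:

```python
import random

# Toy GA sketch: elitist selection, averaging crossover, Gaussian mutation.
# The fitness is an illustrative stand-in for CFD-evaluated net power.
random.seed(2)
BOUNDS = [(0.5, 2.0), (0.5, 2.0), (0.2, 1.5)]  # channel width, depth, land (mm)

def net_power(geom):
    w, d, land = geom
    # hypothetical smooth fitness peaking at (1.1, 0.9, 0.6)
    return -((w - 1.1)**2 + (d - 0.9)**2 + (land - 0.6)**2)

def evolve(fitness, bounds, pop_size=30, gens=60, mut=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
            child = [min(max(g + random.gauss(0, mut), lo), hi)  # mutation
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(net_power, BOUNDS)
```

    In the paper's setting each fitness call is a full CFD solve, so the GA's appeal is that it needs only function values, no gradients, and parallelizes trivially across the population.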

  8. Thermodynamic Analysis of TEG-TEC Device Including Influence of Thomson Effect

    NASA Astrophysics Data System (ADS)

    Feng, Yuanli; Chen, Lingen; Meng, Fankai; Sun, Fengrui

    2018-01-01

    A thermodynamic model of a thermoelectric cooler driven by a thermoelectric generator (TEG-TEC) device is established, considering the Thomson effect. The performance is analyzed and optimized using numerical calculation based on non-equilibrium thermodynamic theory. The influence of the Thomson effect on the optimal performance and variable selection is investigated by comparing the conditions with and without it. The results show that the Thomson effect degrades the performance of the TEG-TEC device: it decreases the cooling capacity by 27%, the coefficient of performance (COP) by 19%, and the maximum cooling temperature difference by 11% when the ratio of thermoelectric element numbers is 0.6, the cold junction temperature of the thermoelectric cooler (TEC) is 285 K, and the hot junction temperature of the thermoelectric generator (TEG) is 450 K. The Thomson effect also degrades the optimal performance of the TEG-TEC device, decreasing the maximum cooling capacity by 28% and the maximum COP by 28% under the same junction temperatures, and it narrows the optimal variable range and optimal working range. In the design of such devices, the limited number of thermoelectric elements should be allocated preferentially to the TEG when the Thomson effect is considered. The results may provide some guidelines for the design of TEG-TEC devices.

  9. Objective Lens Optimized for Wavefront Delivery, Pupil Imaging, and Pupil Ghosting

    NASA Technical Reports Server (NTRS)

    Olzcak, Gene

    2009-01-01

    An interferometer objective lens (or diverger) may be used to transform a collimated beam into a diverging or converging beam. This innovation provides an objective lens that has diffraction-limited optical performance that is optimized at two sets of conjugates: imaging to the objective focus and imaging to the pupil. The lens thus provides for simultaneous delivery of a high-quality beam and excellent pupil resolution properties.

  10. Pursuing Polymer Dielectric Interfacial Effect in Organic Transistors for Photosensing Performance Optimization.

    PubMed

    Wu, Xiaohan; Chu, Yingli; Liu, Rui; Katz, Howard E; Huang, Jia

    2017-12-01

    Polymer dielectrics in organic field-effect transistors (OFETs) are essential to provide the devices with overall flexibility, stretchability, and printability, and they simultaneously introduce charge interaction at the interface with organic semiconductors (OSCs). The interfacial effect between various polymer dielectrics and OSCs significantly and intricately influences device performance. However, understanding of this effect is limited because the interface is buried and the interfacial charge interaction is difficult to stimulate and characterize. Here, this challenge is overcome by utilizing illumination to stimulate the interfacial effect in various OFETs and characterizing the responses by measuring photoinduced changes in OFET performance. This systematic investigation reveals the mechanism of the intricate interfacial effect in detail and mathematically explains how the photosensitive OFET characteristics are determined by parameters including the polar group of the polymer dielectric and the OSC side chain. By utilizing this mechanism, the performance of organic electronics can be precisely controlled and optimized. OFETs with a strong interfacial effect can also show a signal additivity under repeated light pulses, which is applicable to a photostimulated synapse emulator. Therefore, this work provides a detailed understanding of the interfacial effect and novel strategies for optimizing OFET photosensory performance.

  11. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Space Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensors/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  12. Analysis of static and dynamic characteristic of spindle system and its structure optimization in camshaft grinding machine

    NASA Astrophysics Data System (ADS)

    Feng, Jianjun; Li, Chengzhe; Wu, Zhi

    2017-08-01

    As an important part of the valve opening and closing controller in an engine, the camshaft has high machining-accuracy requirements. Taking the high-speed camshaft grinder spindle system as the research object and the spindle system performance as the optimization target, this paper first uses Solidworks to establish the three-dimensional finite element model (FEM) of the spindle system, then conducts static and modal analyses by applying the established FEM in ANSYS Workbench, and finally uses the design optimization function of ANSYS Workbench to optimize the structural parameters of the spindle system. The results show that the design of the spindle system fully meets the production requirements and that the performance of the optimized spindle system is improved. In addition, this paper provides an analysis and optimization method for other grinder spindle systems.

  13. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.

    PubMed

    Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi

    2014-12-01

    In this study, two experimental data sets, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD, and ammonia) as well as an equally weighted combination of the five. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating the methane experimental results, it predicted the other intermediary outputs less accurately. The multi-objective optimization, on the other hand, provided better overall results than methane-only optimization even where it did not fully capture every intermediary output. The results from the parameter optimization were validated by their independent application to the data sets of the second digester. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.

    PubMed

    Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric

    2018-03-01

    Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method - multi-level particle swarm optimization (MLPSO) - to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered as a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that with the proposed algorithm, a more suitable modeling framework, as well as computational time savings and better optimization performance are obtained than that reported in the literature on this subject.

  15. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

    Trajectory optimization is an integral component of aerospace vehicle design, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid-electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single- and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the nonlinear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
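
    The sparsity that motivates the parallelization can be seen in a toy transcription: defect constraints couple only adjacent nodes, so the constraint Jacobian is mostly zeros. The sketch below uses trapezoidal defects on a double integrator for simplicity; Legendre-Gauss-Lobatto collocation, as in the paper, has the same local-coupling property:

```python
import numpy as np

# Sketch of collocation sparsity: trapezoidal defect constraints for the
# double integrator x' = v, v' = u on N nodes with step h.
N, h = 8, 0.5

def defects(z):
    """z packs states x, v and controls u at N nodes; returns 2(N-1) defects."""
    x, v, u = z[:N], z[N:2 * N], z[2 * N:]
    dx, dv = v, u                                  # dynamics
    d1 = x[1:] - x[:-1] - 0.5 * h * (dx[1:] + dx[:-1])
    d2 = v[1:] - v[:-1] - 0.5 * h * (dv[1:] + dv[:-1])
    return np.concatenate([d1, d2])

# finite-difference Jacobian: each defect touches only two adjacent nodes
z0 = np.zeros(3 * N)
J = np.zeros((2 * (N - 1), 3 * N))
for j in range(3 * N):
    e = np.zeros(3 * N)
    e[j] = 1e-6
    J[:, j] = (defects(z0 + e) - defects(z0)) / 1e-6
sparsity = np.count_nonzero(J) / J.size
```

    Each of the 14 defects depends on only 4 of the 24 variables, so the Jacobian is about 17% dense, and the defect rows (and their derivative columns) can be evaluated independently, which is the structure the parallel implementation exploits.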

  16. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    PubMed

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address reliability and technical problems of microgrids (MGs) by designing a number of self-adequate autonomous sub-MGs via an MG clustering approach. In doing so, a multi-objective optimization problem is developed in which power loss reduction, voltage profile improvement and reliability enhancement are the objective functions. To solve the optimization problem, a hybrid algorithm named HS-GA, based on genetic and harmony search algorithms, is provided, and a load flow method is given to model different types of DGs as droop-controlled units. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
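
    The harmony search half of an HS-GA-style hybrid can be illustrated with a minimal sketch (not the authors' implementation): new solutions are improvised from a memory of good ones, occasionally pitch-adjusted or randomized, and replace the worst memory entry when they improve a toy objective standing in for the paper's loss/voltage/reliability index.

```python
import random

def harmony_search(loss, bounds, memory_size=10, iters=500,
                   hmcr=0.9, par=0.3, seed=1):
    """Minimal harmony search: improvise new solutions from a memory of
    good ones, with occasional pitch adjustment or random improvisation,
    and replace the worst memory entry whenever the new one is better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(memory_size)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:          # harmony memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:       # pitch adjustment
                    x = min(hi, max(lo, x + rng.uniform(-1, 1) * 0.05 * (hi - lo)))
            else:                            # random improvisation
                x = rng.uniform(lo, hi)
            new.append(x)
        worst = max(range(memory_size), key=lambda i: loss(memory[i]))
        if loss(new) < loss(memory[worst]):
            memory[worst] = new
    return min(memory, key=loss)

# Toy objective standing in for the paper's loss/voltage/reliability index.
best = harmony_search(lambda v: sum(x * x for x in v), [(-5, 5)] * 3)
```

    In the paper's hybrid, a genetic algorithm is layered on top of this memory update; here only the harmony search component is sketched.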

  17. The Role of Efficient XML Interchange (EXI) in Navy Wide-Area Network (WAN) Optimization

    DTIC Science & Technology

    2015-03-01

    compress, and re-encrypt data to continue providing optimization through compression; however, that capability requires careful consideration of...optimization 23 of encrypted data requires a careful analysis and comparison of performance improvements and IA vulnerabilities. It is important...Contained EXI capitalizes on multiple techniques to improve compression, and they vary depending on a set of EXI options passed to the codec

  18. Perceptual learning through optimization of attentional weighting: human versus optimal Bayesian learner

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Abbey, Craig K.; Pham, Binh T.; Shimozaki, Steven S.

    2004-01-01

    Human performance in visual detection, discrimination, identification, and search tasks typically improves with practice. Psychophysical studies suggest that perceptual learning is mediated by an enhancement in the coding of the signal, and physiological studies suggest that it might be related to the plasticity in the weighting or selection of sensory units coding task-relevant information (learning through attention optimization). We propose an experimental paradigm (optimal perceptual learning paradigm) to systematically study the dynamics of perceptual learning in humans by allowing comparisons to that of an optimal Bayesian algorithm and a number of suboptimal learning models. We measured improvement in human localization (eight-alternative forced-choice with feedback) performance of a target randomly sampled from four elongated Gaussian targets with different orientations and polarities and kept as a target for a block of four trials. The results suggest that human perceptual learning can occur within a span of four trials (<1 min) but that human learning is slower and incomplete with respect to the optimal algorithm (23.3% reduction in human efficiency from the 1st-to-4th learning trials). The greatest improvement in human performance, occurring from the 1st-to-2nd learning trial, was also present in the optimal observer, and thus reflects a property inherent to the visual task and not a property particular to the human perceptual learning mechanism. One notable source of human inefficiency is that, unlike the ideal observer, human learning relies more heavily on previous decisions than on the provided feedback, resulting in no human learning on trials following a previous incorrect localization decision. Finally, the proposed theory and paradigm provide a flexible framework for future studies to evaluate the optimality of human learning of other visual cues and/or sensory modalities.

  19. A Language for Specifying Compiler Optimizations for Generic Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcock, Jeremiah J.

    2007-01-01

    Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization, because the libraries are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.

  20. Power optimization of ultrasonic friction-modulation tactile interfaces.

    PubMed

    Wiertlewski, Michael; Colgate, J Edward

    2015-01-01

    Ultrasonic friction-modulation devices provide rich tactile sensation on flat surfaces and have the potential to restore tangibility to touchscreens. To date, their adoption into consumer electronics has been in part limited by relatively high power consumption, incompatible with the requirements of battery-powered devices. This paper introduces a method that optimizes the energy efficiency and performance of this class of devices. It considers optimal energy transfer to the impedance provided by the finger interacting with the surface. Constitutive equations are determined from the mode shape of the interface and the piezoelectric coupling of the actuator. The optimization procedure employs a lumped parameter model to simplify the treatment of the problem. Examples and an experimental study show the evolution of the optimal design as a function of the impedance of the finger.

  1. Optimization of the Upper Surface of Hypersonic Vehicle Based on CFD Analysis

    NASA Astrophysics Data System (ADS)

    Gao, T. Y.; Cui, K.; Hu, S. C.; Wang, X. P.; Yang, G. W.

    2011-09-01

    For hypersonic vehicles, aerodynamic performance requirements become more demanding. Optimizing the vehicle shape to meet project demands is therefore a key technology for improving hypersonic vehicle performance. Based on an existing vehicle, the upper surface of a simplified hypersonic configuration was optimized to obtain a shape that suits the project demand. At the cruising condition, the upper surface was parameterized with the B-spline curve method. The incremental parametric method and the reconstruction technology of the local mesh were applied here. The whole flow field was calculated and the aerodynamic performance of the craft was obtained by computational fluid dynamics (CFD). Then the vehicle shape was optimized to achieve the maximum lift-drag ratio at angles of attack of 3°, 4° and 5°. The results provide a reference for practical design.

  2. Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.

    PubMed

    Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas

    2011-06-24

    This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, providing the capability to improve transient performance and fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Simulation Modeling to Compare High-Throughput, Low-Iteration Optimization Strategies for Metabolic Engineering

    PubMed Central

    Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.

    2018-01-01

    Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
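
    The low-iteration design-build-test loop described above can be sketched as a batched search over a simulated expression landscape. The landscape, expression levels, and batch size below are illustrative placeholders, not the authors' model:

```python
import random

def batched_optimize(landscape, levels, genes=3, batch=8, iterations=8, seed=0):
    """Batched design-build-test loop: each iteration 'builds' a batch of
    expression-level designs near the current best and keeps the winner,
    mimicking a low-iteration genetic engineering workflow."""
    rng = random.Random(seed)
    lo, hi = min(levels), max(levels)
    best = tuple(rng.choice(levels) for _ in range(genes))
    for _ in range(iterations):
        candidates = [tuple(min(max(g + rng.choice((-1, 0, 1)), lo), hi)
                            for g in best) for _ in range(batch)]
        best = max(candidates + [best], key=landscape)  # never discards the best
    return best

# Toy smooth landscape: titer peaks at expression level 3 for every gene.
titer = lambda design: -sum((g - 3) ** 2 for g in design)
best = batched_optimize(titer, levels=range(6))
```

    On rugged landscapes (the paper's focus), larger batches and more exploratory moves trade off against the number of design-build-test iterations.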

  4. Optimization of an optically implemented on-board FDMA demultiplexer

    NASA Technical Reports Server (NTRS)

    Fargnoli, J.; Riddle, L.

    1991-01-01

    Performance of a 30 GHz frequency division multiple access (FDMA) uplink to a processing satellite is modelled for the case where the onboard demultiplexer is implemented optically. Included in the performance model are the effects of adjacent channel interference, intersymbol interference, and spurious signals associated with the optical implementation. Demultiplexer parameters are optimized to provide the minimum bit error probability at a given bandwidth efficiency when filtered QPSK modulation is employed.

  5. Design Optimization Programmable Calculators versus Campus Computers.

    ERIC Educational Resources Information Center

    Savage, Michael

    1982-01-01

    A hypothetical design optimization problem and technical information on the three design parameters are presented. Although this nested iteration problem can be solved on a computer (flow diagram provided), this article suggests that several hand held calculators can be used to perform the same design iteration. (SK)

  6. Multi-objective optimization of GENIE Earth system models.

    PubMed

    Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J

    2009-07-13

    The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.

  7. A thermal vacuum test optimization procedure

    NASA Technical Reports Server (NTRS)

    Kruger, R.; Norris, H. P.

    1979-01-01

    An analytical model was developed that can be used to establish certain parameters of a thermal vacuum environmental test program based on an optimization of program costs. This model takes the form of a computer program that interacts with the user for the input of certain parameters. The program provides the user a list of pertinent information regarding an optimized test program, along with graphs of some of the parameters. The model is a first attempt in this area and includes numerous simplifications. It appears useful as a general guide and provides a way to extrapolate past performance to future missions.

  8. Performance Optimization Control of ECH using Fuzzy Inference Application

    NASA Astrophysics Data System (ADS)

    Dubey, Abhay Kumar

    Electro-chemical honing (ECH) is a hybrid electrolytic precision micro-finishing technology that, by combining the physico-chemical actions of electro-chemical machining and conventional honing, provides controlled functional surface generation and fast material removal in a single operation. Multi-performance process optimization has become vital for utilizing the full potential of manufacturing processes to meet the challenging requirements placed on the surface quality, size, tolerances and production rate of engineering components in this globally competitive scenario. This paper presents a strategy that integrates Taguchi matrix experimental design, analysis of variance and a fuzzy inference system (FIS) to formulate a robust, practical multi-performance optimization methodology for complex manufacturing processes like ECH, which involve several control variables. Two methodologies, one using genetic algorithm tuning of the FIS (GA-tuned FIS) and another using an adaptive network-based fuzzy inference system (ANFIS), have been evaluated for a multi-performance optimization case study of ECH. The experimental results confirm their potential for the wide range of machining conditions employed in ECH.

  9. Procedure for minimizing the cost per watt of photovoltaic systems

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
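
    The result that a fractional performance loss at any cell- or array-fabrication step produces the same fractional increase in cost per watt follows from cost/W = C / (P(1 - δ)) ≈ (1 + δ)·C/P for small δ. A quick numeric check with assumed illustrative figures:

```python
def cost_per_watt(cost, power):
    """Cost-performance figure of merit for the complete system."""
    return cost / power

base = cost_per_watt(100.0, 50.0)                 # nominal: $2.00/W
delta = 0.04                                      # 4% power loss at one step
degraded = cost_per_watt(100.0, 50.0 * (1 - delta))
increase = degraded / base - 1                    # relative cost/W increase
# increase = delta / (1 - delta), approximately delta for small losses
```

    The exact increase is δ/(1 - δ), which reduces to δ to first order, matching the paper's statement for small losses.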

  10. Co-Optimization of Fuels and Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, John

    2016-03-24

    The Co-Optimization of Fuels and Engines (Co-Optima) initiative is a new DOE initiative focused on accelerating the introduction of affordable, scalable, and sustainable biofuels and high-efficiency, low-emission vehicle engines. The simultaneous fuels and vehicles research and development (R&D) is designed to deliver maximum energy savings, emissions reduction, and on-road vehicle performance. The initiative's integrated approach combines the previously independent areas of biofuels and combustion R&D, bringing together two DOE Office of Energy Efficiency & Renewable Energy research offices, ten national laboratories, and numerous industry and academic partners to simultaneously tackle fuel and engine R&D to maximize energy savings and on-road vehicle performance while dramatically reducing transportation-related petroleum consumption and greenhouse gas (GHG) emissions. This multi-year project will provide industry with the scientific underpinnings required to move new biofuels and advanced engine systems to market faster while identifying and addressing barriers to their commercialization. This project's ambitious, first-of-its-kind approach simultaneously tackles fuel and engine innovation to co-optimize performance of both elements and provide dramatic and rapid cuts in fuel use and emissions. This presentation provides an overview of the project.

  11. Performance improvement of optical CDMA networks with stochastic artificial bee colony optimization technique

    NASA Astrophysics Data System (ADS)

    Panda, Satyasen

    2018-05-01

    This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on Lévy-flight swarm intelligence, referred to as artificial bee colony Lévy-flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal-to-noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design to improve the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and power efficiency of OCPs are included. The experimental results demonstrate the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
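
    A minimal sketch of the core idea, not the paper's implementation: a standard ABC loop whose scout phase takes heavy-tailed jumps (Pareto-distributed here, as a stand-in for a true Lévy step). The parameters and objective are illustrative, not the paper's SNR model:

```python
import random

def abc_levy(loss, bounds, n_bees=10, iters=300, limit=20, seed=7):
    """Artificial bee colony with heavy-tailed scouting: employed bees
    refine food sources locally; a source abandoned after `limit` failed
    improvements is moved by a Pareto-distributed jump."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x, lo, hi: min(hi, max(lo, x))
    sources = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_bees)]
    trials = [0] * n_bees
    for _ in range(iters):
        for i in range(n_bees):
            k, d = rng.randrange(n_bees), rng.randrange(dim)
            cand = list(sources[i])
            cand[d] = clip(cand[d] + rng.uniform(-1, 1)
                           * (cand[d] - sources[k][d]), *bounds[d])
            if loss(cand) < loss(sources[i]):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:            # scout phase: heavy-tailed jump
                sources[i] = [clip(x + rng.choice((-1, 1))
                                   * 0.1 * rng.paretovariate(1.5), lo, hi)
                              for x, (lo, hi) in zip(sources[i], bounds)]
                trials[i] = 0
    return min(sources, key=loss)

# Toy objective standing in for the paper's SNR-based assignment cost.
best = abc_levy(lambda v: sum(x * x for x in v), [(-3, 3)] * 2)
```

    The heavy-tailed scout jumps give occasional long exploratory moves, the property Lévy-flight variants rely on to escape local optima.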

  12. Modelling of a Solar Thermal Power Plant for Benchmarking Blackbox Optimization Solvers

    NASA Astrophysics Data System (ADS)

    Lemyre Garneau, Mathieu

    A new family of problems is provided to serve as a benchmark for blackbox optimization solvers. The problems are single- or bi-objective and vary in complexity in terms of the number of variables used (from 5 to 29), the type of variables (integer, real, category), the number of constraints (from 5 to 17) and their types (binary or continuous). In order to provide problems exhibiting dynamics that reflect real engineering challenges, they are extracted from an original numerical model of a concentrated solar power (CSP) plant with molten salt thermal storage. The model simulates the performance of the power plant using a high-level model of each of its main components, namely a heliostat field, a central cavity receiver, a molten salt heat storage, a steam generator and an idealized power block. The heliostat field layout is determined through a simple automatic strategy that finds the best individual positions on the field by considering their respective cosine efficiency, atmospheric scattering and spillage losses as a function of the design parameters. A Monte-Carlo integral method is used to evaluate the heliostat field's optical performance throughout the day so that shadowing effects between heliostats are considered, and the results of this evaluation provide the inputs to simulate the levels and temperatures of the thermal storage. The molten salt storage inventory is used to transfer thermal energy to the power block, which simulates a simple Rankine cycle with a single steam turbine. Auxiliary models are used to provide additional optimization constraints on the investment cost, parasitic losses or component failures. The results of preliminary optimizations performed with the NOMAD software using default settings are provided to show the validity of the problems.
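
    The Monte-Carlo integral evaluation mentioned above can be illustrated with a toy daily-average cosine-efficiency estimate; the angle range and sample count are assumptions for illustration, not the model's actual geometry:

```python
import math, random

def mc_average(f, lo, hi, n=100_000, seed=3):
    """Monte Carlo estimate of the average of f over [lo, hi], the same
    kind of integral used for the daily optical-efficiency evaluation."""
    rng = random.Random(seed)
    return sum(f(rng.uniform(lo, hi)) for _ in range(n)) / n

# Daily-average cosine efficiency for a toy incidence angle sweeping 0-60 deg.
avg_cos = mc_average(math.cos, 0.0, math.pi / 3)
# Analytic value: sin(pi/3) / (pi/3) = 3*sqrt(3) / (2*pi), about 0.827
```

    In the full model the integrand also accounts for shadowing between heliostats, which is exactly where sampling beats closed-form evaluation.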

  13. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan

    This report summarizes our effort and results in building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.

  14. Influence of Structural Parameters on the Performance of Vortex Valve Variable-Thrust Solid Rocket Motor

    NASA Astrophysics Data System (ADS)

    Wei, Xianggeng; Li, Jiang; He, Guoqiang

    2017-04-01

    The vortex valve solid variable-thrust motor is a new solid motor that can achieve vehicle trajectory optimization and motor energy management. Numerical calculations were performed to investigate the influence of the vortex chamber diameter, shape, and height on the modulation performance of the vortex valve solid variable-thrust motor. Tests verified that the calculated results are consistent with laboratory results, with a maximum error of 9.5%. The research drew the following major conclusions: optimal modulation performance was achieved with a cylindrical vortex chamber; increasing the vortex chamber diameter improved modulation performance; optimal modulation performance was achieved when the height of the vortex chamber is half of the vortex chamber outlet diameter; and hot gas control flow enhanced modulation performance. The results provide a basis for establishing a design method for the vortex valve solid variable-thrust motor.

  15. Integration of Dakota into the NEAMS Workbench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Lefebvre, Robert A.; Langley, Brandon R.

    2017-07-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on integrating Dakota into the NEAMS Workbench. The NEAMS Workbench, developed at Oak Ridge National Laboratory, is a new software framework that provides a graphical user interface, input file creation, parsing, validation, job execution, workflow management, and output processing for a variety of nuclear codes. Dakota is a tool developed at Sandia National Laboratories that provides a suite of uncertainty quantification and optimization algorithms. Providing Dakota within the NEAMS Workbench allows users of nuclear simulation codes to perform uncertainty and optimization studies on their nuclear codes from within a common, integrated environment. Details of the integration and parsing are provided, along with an example of Dakota running a sampling study on the fuels performance code, BISON, from within the NEAMS Workbench.

  16. Aero/structural tailoring of engine blades (AERO/STAEBL)

    NASA Technical Reports Server (NTRS)

    Brown, K. W.

    1988-01-01

    This report describes the Aero/Structural Tailoring of Engine Blades (AERO/STAEBL) program, which is a computer code used to perform engine fan and compressor blade aero/structural numerical optimizations. These optimizations seek a blade design of minimum operating cost that satisfies realistic blade design constraints. This report documents the overall program (i.e., input, optimization procedures, approximate analyses) and also provides a detailed description of the validation test cases.

  17. Robust design optimization method for centrifugal impellers under surface roughness uncertainties due to blade fouling

    NASA Astrophysics Data System (ADS)

    Ju, Yaping; Zhang, Chuhua

    2016-03-01

    Blade fouling has been proved to be a great threat to compressor performance in the operating stage. Current research on the fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not take into account realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers with consideration of blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, the support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The results show that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel proves to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design is found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors with consideration of blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
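
    The metamodel-plus-MCS uncertainty analysis can be sketched as follows, with an inverse-distance-weighted surrogate standing in for the paper's SVR and a toy efficiency-vs-roughness dataset in place of CFD results (all names and numbers are illustrative assumptions):

```python
import math, random

def idw_surrogate(samples):
    """Cheap metamodel (inverse-distance weighting, standing in for the
    paper's SVR): predicts performance from a few expensive evaluations."""
    def predict(x):
        num = den = 0.0
        for xs, ys in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, xs))
            if d2 < 1e-12:
                return ys                    # exactly on a training point
            w = 1.0 / d2
            num, den = num + w * ys, den + w
        return num / den
    return predict

def mcs_stats(predict, nominal, sigma, n=20_000, seed=5):
    """Monte Carlo simulation of the response under Gaussian uncertainty
    on each input, returning the mean and standard deviation."""
    rng = random.Random(seed)
    vals = [predict([v + rng.gauss(0, sigma) for v in nominal])
            for _ in range(n)]
    mean = sum(vals) / n
    return mean, math.sqrt(sum((v - mean) ** 2 for v in vals) / n)

# Toy 'efficiency vs. roughness' data standing in for CFD evaluations.
train = [((r,), 0.90 - 0.5 * r) for r in (0.0, 0.05, 0.1, 0.2)]
mean_eff, std_eff = mcs_stats(idw_surrogate(train), nominal=[0.1], sigma=0.02)
```

    The surrogate makes the thousands of Monte Carlo evaluations affordable; in the paper that role is played by SVR trained on CFD runs.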

  18. Performance seeking control program overview

    NASA Technical Reports Server (NTRS)

    Orme, John S.

    1995-01-01

    The Performance Seeking Control (PSC) program evolved from a series of integrated propulsion-flight control research programs flown at NASA Dryden Flight Research Center (DFRC) on an F-15. The first of these was the Digital Electronic Engine Control (DEEC) program, which provided digital engine controls suitable for integration. The DEEC and the digital electronic flight control system of the NASA F-15 were ideally suited for integrated controls research. The Advanced Engine Control System (ADECS) program proved that integrated engine and aircraft control could improve overall system performance. The objective of the PSC program was to advance the technology for a fully integrated propulsion flight control system. Whereas ADECS provided single-variable control for an average engine, PSC controlled multiple propulsion system variables while adapting to the measured engine performance. PSC was developed as a model-based, adaptive control algorithm and included four optimization modes: minimum fuel flow at constant thrust, minimum turbine temperature at constant thrust, maximum thrust, and minimum thrust. Subsonic and supersonic flight testing of the PSC algorithm was conducted at NASA Dryden in a series of five flight test phases, covering all four optimization modes over the full throttle range; 72 research flights were conducted over a three-year period. The primary objective of flight testing was to exercise each PSC optimization mode and quantify the resulting performance improvements.

  19. The Job Is the Learning Environment: Performance-Centered Learning To Support Knowledge Worker Performance.

    ERIC Educational Resources Information Center

    Dickover, Noel T.

    2002-01-01

    Explains performance-centered learning (PCL), an approach to optimize support for performance on the job by making corporate assets available to knowledge workers so they can solve actual problems. Illustrates PCL with a Web site that provides just-in-time learning, collaboration, and performance support tools to improve performance at the…

  20. Modeling of pulsed propellant reorientation

    NASA Technical Reports Server (NTRS)

    Patag, A. E.; Hochstein, J. I.; Chato, D. J.

    1989-01-01

    Optimization of the propellant reorientation process can provide increased payload capability and extend the service life of spacecraft. The use of pulsed propellant reorientation to optimize the reorientation process is proposed. The ECLIPSE code was validated for modeling the reorientation process and is used to study pulsed reorientation in small-scale and full-scale propellant tanks. A dimensional analysis of the process is performed and the resulting dimensionless groups are used to present and correlate the computational predictions for reorientation performance.

  1. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two objective values of winding products, a mechanical property (tensile strength) and a physical property (void content), were calculated. The paper then presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, a global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, a local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacture of winding products.
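
    The local single-parameter sensitivity step can be illustrated with a central-difference sketch; the toy 'strength' model and operating point are assumptions for illustration, not the paper's winding data:

```python
def local_sensitivity(f, x0, index, h=1e-4):
    """One-at-a-time local sensitivity: central-difference slope of the
    response with respect to the parameter at position `index`."""
    up, down = list(x0), list(x0)
    up[index] += h
    down[index] -= h
    return (f(up) - f(down)) / (2 * h)

# Toy winding model: 'tensile strength' responds linearly to temperature
# (parameter 0) and quadratically to tape tension (parameter 1).
strength = lambda p: 100 + 8.0 * p[0] - 1.5 * p[1] ** 2
s_temp = local_sensitivity(strength, [1.0, 1.0], 0)      # → 8.0
s_tension = local_sensitivity(strength, [1.0, 1.0], 1)   # → -3.0
```

    Repeating this over a grid of operating points distinguishes the stable ranges (small |sensitivity|) from the unstable ones, the distinction the paper uses to pick optimal intervals.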

  2. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which uses a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, a mechanical performance (tensile strength) and a physical property (void content), were calculated. The paper then presents an integrated methodology that combines multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, a global multi-parameter sensitivity analysis was applied to investigate the sensitivity of each parameter in the tape winding process. Then, a local single-parameter sensitivity analysis was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished, the process parameter ranges were optimized, and comprehensive optimized intervals of the winding parameters were provided. A verification test validated that the optimized intervals of the process parameters were reliable and stable for manufacturing winding products. PMID:29385048

  3. Optimal digital dynamical decoupling for general decoherence via Walsh modulation

    NASA Astrophysics Data System (ADS)

    Qi, Haoyu; Dowling, Jonathan P.; Viola, Lorenza

    2017-11-01

    We provide a general framework for constructing digital dynamical decoupling sequences based on Walsh modulation—applicable to arbitrary qubit decoherence scenarios. By establishing equivalence between decoupling design based on Walsh functions and on concatenated projections, we identify a family of optimal Walsh sequences, which can be exponentially more efficient, in terms of the required total pulse number, for fixed cancellation order, than known digital sequences based on concatenated design. Optimal sequences for a given cancellation order are highly non-unique—their performance depending sensitively on the control path. We provide an analytic upper bound to the achievable decoupling error and show how sequences within the optimal Walsh family can substantially outperform concatenated decoupling in principle, while respecting realistic timing constraints.
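
    The digital modulation underlying such sequences is easy to sketch: a Walsh function takes values ±1 on dyadic subintervals, and the sign switches of the chosen function give the pulse times of the decoupling sequence. The construction below is a generic Paley-ordered one built from binary digits, not the paper's optimal-sequence selection.

```python
# Generic Paley-ordered Walsh functions, sampled on N = 2^m subintervals of [0, 1).
# Illustrative of digital +/-1 modulation; not the paper's sequence design.

def walsh(n, N):
    """Sample the Paley-ordered Walsh function w_n at N = 2^m points."""
    m = N.bit_length() - 1
    vals = []
    for k in range(N):
        bits_k = [(k >> j) & 1 for j in range(m)]   # binary digits of time index, LSB first
        bits_n = [(n >> j) & 1 for j in range(m)]   # binary digits of order n, LSB first
        # sign is the parity of the paired binary digits
        parity = sum(bk * bn for bk, bn in zip(bits_k, reversed(bits_n))) % 2
        vals.append(1 - 2 * parity)
    return vals

# w_0 is constant (free evolution); w_1 flips sign halfway: the pulse times
# of a sequence are exactly the sign switches of its Walsh profile.
w0 = walsh(0, 8)
w1 = walsh(1, 8)
switches = [k for k in range(1, 8) if w1[k] != w1[k - 1]]
```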

  4. The automatic neutron guide optimizer guide_bot

    NASA Astrophysics Data System (ADS)

    Bertelsen, Mads

    2017-09-01

    The guide optimization software guide_bot is introduced, the main purpose of which is to reduce the time spent programming when performing numerical optimization of neutron guides. A limited amount of information on the overall guide geometry and a figure of merit describing the desired beam is used to generate the code necessary to solve the problem. A generated McStas instrument file performs the Monte Carlo ray-tracing, which is controlled by iFit optimization scripts. The resulting optimal guide is thoroughly characterized, both in terms of brilliance transfer from an idealized source and on a more realistic source such as the ESS Butterfly moderator. Basic MATLAB knowledge is required from the user, but no experience with McStas or iFit is necessary. This paper briefly describes how guide_bot is used and some important aspects of the code. A short validation against earlier work is performed and shows the expected agreement. In addition, a scan over the vertical divergence requirement, where individual guide optimizations are performed for each corresponding figure of merit, provides valuable data on the consequences of this parameter. The guide_bot software package is best suited for the start of an instrument design project, as it excels at comparing a large number of different guide alternatives for a specific set of instrument requirements, but it remains applicable in later stages because constraints can be used to optimize more specific guides.

  5. Multivariable optimization of liquid rocket engines using particle swarm algorithms

    NASA Astrophysics Data System (ADS)

    Jones, Daniel Ray

    Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
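
    A minimal global-best PSO of the standard form can be sketched as follows. The objective here is a toy two-variable stand-in for the specific-impulse model (not the thesis's FAC performance code), and the hyperparameters (inertia w, cognitive/social weights c1, c2) are generic defaults rather than the thesis's variant.

```python
# Standard global-best particle swarm optimization sketch (assumed generic
# PSO, not the thesis's specific variant), maximizing a toy objective.
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # move and clamp to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            v = f(pos[i])
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# toy stand-in objective with its maximum at (2.3, 8.0), e.g. a mixture
# ratio and an expansion ratio (hypothetical, for illustration only)
obj = lambda x: -((x[0] - 2.3) ** 2 + 0.1 * (x[1] - 8.0) ** 2)
best, val = pso(obj, [(1.0, 4.0), (2.0, 20.0)])
```

    As in the thesis, a brute-force grid over the same bounds is a natural validation check for an optimizer of this kind.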

  6. An optimal control model approach to the design of compensators for simulator delay

    NASA Technical Reports Server (NTRS)

    Baron, S.; Lancraft, R.; Caglayan, A.

    1982-01-01

    The effects of display delay on pilot performance and workload and of the design of the filters to ameliorate these effects were investigated. The optimal control model for pilot/vehicle analysis was used both to determine the potential delay effects and to design the compensators. The model was applied to a simple roll tracking task and to a complex hover task. The results confirm that even small delays can degrade performance and impose a workload penalty. A time-domain compensator designed by using the optimal control model directly appears capable of providing extensive compensation for these effects even in multi-input, multi-output problems.

  7. Caffeine dosing strategies to optimize alertness during sleep loss.

    PubMed

    Vital-Lopez, Francisco G; Ramakrishnan, Sridhar; Doty, Tracy J; Balkin, Thomas J; Reifman, Jaques

    2018-05-28

    Sleep loss, which affects about one-third of the US population, can severely impair physical and neurobehavioural performance. Although caffeine, the most widely used stimulant in the world, can mitigate these effects, currently there are no tools to guide the timing and amount of caffeine consumption to optimize its benefits. In this work, we provide an optimization algorithm, suited for mobile computing platforms, to determine when and how much caffeine to consume, so as to safely maximize neurobehavioural performance at the desired time of day, under any sleep-loss condition. The algorithm is based on our previously validated Unified Model of Performance, which predicts the effect of caffeine consumption on psychomotor vigilance task performance. We assessed the algorithm by comparing the caffeine-dosing strategies (timing and amount) it identified with the dosing strategies used in four experimental studies involving total and partial sleep loss. Through computer simulations, we showed that the algorithm yielded caffeine-dosing strategies that enhanced predicted psychomotor vigilance task performance by up to 64% while using the same total amount of caffeine as in the original studies. In addition, the algorithm identified strategies that resulted in performance equivalent to that in the experimental studies while reducing caffeine consumption by up to 65%. Our work provides the first quantitative caffeine optimization tool for designing effective strategies to maximize neurobehavioural performance and to avoid excessive caffeine consumption during any arbitrary sleep-loss condition. © 2018 The Authors. Journal of Sleep Research published by John Wiley & Sons Ltd on behalf of European Sleep Research Society.
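
    The underlying search problem can be sketched as a scan over a small grid of (time, dose) schedules under a total-dose cap, keeping the schedule that maximizes predicted performance at a target time. The alertness model below is a deliberately crude stand-in, not the validated Unified Model of Performance, and all numbers are illustrative.

```python
# Hypothetical sketch of the dosing-search idea: exhaustive scan over
# (time, dose) schedules with a toy alertness model. Illustrative only.
import itertools
import math

def predicted_performance(schedule, target_hour):
    """Toy model: baseline alertness declines with time of day awake;
    each dose adds a benefit that decays after consumption."""
    base = 100.0 - 2.0 * target_hour
    boost = sum(0.08 * dose * math.exp(-(target_hour - t) / 5.0)
                for t, dose in schedule if t <= target_hour)
    return base + boost

def best_schedule(times, doses, max_total, target_hour):
    best, best_val = None, float("-inf")
    for combo in itertools.product(doses, repeat=len(times)):
        if sum(combo) > max_total:        # enforce the total-dose cap
            continue
        sched = [(t, d) for t, d in zip(times, combo) if d > 0]
        val = predicted_performance(sched, target_hour)
        if val > best_val:
            best, best_val = sched, val
    return best, best_val

# candidate dosing times (hours), candidate doses (mg), 300 mg cap,
# performance evaluated at hour 18 (all hypothetical)
sched, perf = best_schedule(times=[8, 12, 16], doses=[0, 100, 200],
                            max_total=300, target_hour=18)
```

    Even this crude model reproduces the qualitative behaviour the paper exploits: the search concentrates caffeine close to the time when performance matters most.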

  8. Singular-Arc Time-Optimal Trajectory of Aircraft in Two-Dimensional Wind Field

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2006-01-01

    This paper presents a study of a minimum time-to-climb trajectory analysis for aircraft flying in a two-dimensional, altitude-dependent wind field. The time optimal control problem possesses a singular control structure when the lift coefficient is taken as a control variable. A singular arc analysis is performed to obtain an optimal control solution on the singular arc. Using a time-scale separation with the flight path angle treated as a fast state, the dimensionality of the optimal control solution is reduced by eliminating the lift coefficient control. A further singular arc analysis is used to decompose the original optimal control solution into the flight path angle solution and a trajectory solution as a function of the airspeed and altitude. The optimal control solutions for the initial and final climb segments are computed using a shooting method with known starting values on the singular arc. The numerical results of the shooting method show that the optimal flight path angles on the initial and final climb segments are constant. The analytical approach provides a rapid means for analyzing a time optimal trajectory for aircraft performance.

  9. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
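
    A generic real-coded genetic algorithm of the kind discussed can be sketched as below (truncation selection, arithmetic crossover, Gaussian mutation). This is a standard textbook GA on a toy objective, not the variable-complexity formulation introduced in the work; the encoding and mutation settings are exactly the choices the text warns must be made carefully.

```python
# Minimal real-coded genetic algorithm sketch (standard textbook form,
# not the work's variable-complexity GA), maximizing a toy objective.
import random

def ga(f, bounds, pop_size=30, gens=60, mut_rate=0.2, mut_scale=0.1, seed=3):
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=f, reverse=True)
        elite = scored[: pop_size // 2]              # truncation selection (elitist)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            for d in range(dim):
                if random.random() < mut_rate:            # Gaussian mutation
                    lo, hi = bounds[d]
                    child[d] += random.gauss(0, mut_scale * (hi - lo))
                    child[d] = min(max(child[d], lo), hi)
            children.append(child)
        pop = elite + children
    return max(pop, key=f)

# toy smooth objective with its maximum at (0.7, -0.2) (illustrative only)
best = ga(lambda x: -(x[0] - 0.7) ** 2 - (x[1] + 0.2) ** 2,
          [(-1.0, 1.0), (-1.0, 1.0)])
```

    On a smooth unimodal objective like this, elitism guarantees steady progress; on the discontinuous or noisy domains the text targets, the mutation scale and selection pressure become the critical tuning choices.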

  10. Perturbing engine performance measurements to determine optimal engine control settings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan

    Methods and systems for optimizing the performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable, causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
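
    One plausible reading of this loop is a perturb-and-adjust iteration: dither the control, estimate the local slope of the performance variable from the perturbed measurement, then step the control so the measurement approaches the target. The engine response curve, parameter names, and gains below are hypothetical stand-ins, not the patented method itself.

```python
# Hedged sketch of a perturb-and-adjust control loop. The response model
# and all tuning constants are illustrative only.

def engine_response(spark_advance):
    # toy torque curve peaking at a spark advance of 25 degrees (hypothetical)
    return 100.0 - 0.2 * (spark_advance - 25.0) ** 2

def perturb_and_adjust(target, u=10.0, gain=0.05, delta=0.5, iters=200):
    for _ in range(iters):
        y = engine_response(u)
        if abs(y - target) < 0.1:          # close enough to the target
            break
        # perturb the control to estimate the local slope of the response
        slope = (engine_response(u + delta) - y) / delta
        # step the control so the measured variable moves toward the target
        u += gain * slope * (target - y)
    return u, engine_response(u)

u_final, y_final = perturb_and_adjust(target=95.0)
```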

  11. Evolutionary Design of Controlled Structures

    NASA Technical Reports Server (NTRS)

    Masters, Brett P.; Crawley, Edward F.

    1997-01-01

    Basic physical concepts of structural delay and transmissibility are provided for simple rod and beam structures. Investigations show the sensitivity of these concepts to differing controlled-structures variables, and to rational system modeling effects. An evolutionary controls/structures design method is developed. The basis of the method is an accurate model formulation for dynamic compensator optimization and Genetic Algorithm based updating of sensor/actuator placement and structural attributes. One- and three-dimensional examples from the literature are used to validate the method. Frequency domain interpretation of these controlled structure systems provides physical insight as to how the objective is optimized and consequently what is important in the objective. Several disturbance rejection type controls-structures systems are optimized for a stellar interferometer spacecraft application. The interferometric designs include closed loop tracking optics. Designs are generated for differing structural aspect ratios, differing disturbance attributes, and differing sensor selections. Physical limitations in achieving performance are given in terms of average system transfer function gains and system phase loss. A spacecraft-like optical interferometry system is investigated experimentally over several different optimized controlled structures configurations. Configurations represent common and not-so-common approaches to mitigating pathlength errors induced by disturbances of two different spectra. Results show that an optimized controlled structure for low frequency broadband disturbances achieves modest performance gains over a mass equivalent regular structure, while an optimized structure for high frequency narrow band disturbances is four times better in terms of root-mean-square pathlength. These results are predictable given the nature of the physical system and the optimization design variables.
Fundamental limits on controlled performance are discussed based on the measured and fit average system transfer function gains and system phase loss.

  12. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Dataflow Design Tool: User's Manual

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1996-01-01

    The Dataflow Design Tool is a software tool for selecting a multiprocessor scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. The software tool implements graph-search algorithms and analysis techniques based on the dataflow paradigm. Dataflow analyses provided by the software are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool provides performance optimization through the inclusion of artificial precedence constraints among the schedulable tasks. The user interface and tool capabilities are described. Examples are provided to demonstrate the analysis, scheduling, and optimization functions facilitated by the tool.
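
    One of the analyses described, a performance bound derived from the dataflow graph, can be sketched as a critical-path computation over task execution times. The graph and timings below are illustrative stand-ins, not the tool's actual algorithms or data.

```python
# Sketch: critical-path lower bound on schedule length for a dataflow graph.
# Tasks and execution times are illustrative (a toy control-law graph).

def critical_path(tasks, deps):
    """tasks: {name: execution_time}; deps: {name: [predecessor names]}.
    Returns the earliest finish time of the whole graph, a lower bound on
    schedule length even with unlimited identical processors."""
    finish = {}
    def ft(t):
        if t not in finish:
            finish[t] = tasks[t] + max((ft(p) for p in deps.get(t, [])), default=0)
        return finish[t]
    return max(ft(t) for t in tasks)

# toy graph: read -> filter -> gain -> write, with a side branch filter -> log
tasks = {"read": 2, "filter": 5, "gain": 3, "log": 1, "write": 2}
deps = {"filter": ["read"], "gain": ["filter"], "log": ["filter"], "write": ["gain"]}
bound = critical_path(tasks, deps)   # longest chain: read + filter + gain + write
```

    Artificial precedence constraints, as the text notes, would simply appear as extra edges in `deps`, trading schedule length for fewer simultaneous processors.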

  14. Performance limitations of translationally symmetric nonimaging devices

    NASA Astrophysics Data System (ADS)

    Bortz, John C.; Shatz, Narkis E.; Winston, Roland

    2001-11-01

    The component of the optical direction vector along the symmetry axis is conserved for all rays propagated through a translationally symmetric optical device. This quality, referred to herein as the translational skew invariant, is analogous to the conventional skew invariant, which is conserved in rotationally symmetric optical systems. The invariance of both of these quantities is a consequence of Noether's theorem. We show how performance limits for translationally symmetric nonimaging optical devices can be derived from the distributions of the translational skew invariant for the optical source and for the target to which flux is to be transferred. Examples of computed performance limits are provided. In addition, we show that a numerically optimized non-tracking solar concentrator utilizing symmetry-breaking surface microstructure can overcome the performance limits associated with translational symmetry. The optimized design provides a 47.4% increase in efficiency and concentration relative to an ideal translationally symmetric concentrator.

  15. Optimized survey design for electrical resistivity tomography: combined optimization of measurement configuration and electrode placement

    NASA Astrophysics Data System (ADS)

    Uhlemann, Sebastian; Wilkinson, Paul B.; Maurer, Hansruedi; Wagner, Florian M.; Johnson, Timothy C.; Chambers, Jonathan E.

    2018-07-01

    Within geoelectrical imaging, the choice of measurement configurations and electrode locations is known to control the image resolution. Previous work has shown that optimized survey designs can provide a model resolution that is superior to standard survey designs. This paper demonstrates a methodology to optimize resolution within a target area, while limiting the number of required electrodes, thereby selecting optimal electrode locations. This is achieved by extending previous work on the `Compare-R' algorithm, which optimizes the model resolution in a target area by calculating updates to the resolution matrix. Here, an additional weighting factor is introduced that allows measurement configurations that can be acquired on a given set of electrodes to be preferentially added. The performance of the optimization is tested on two synthetic examples and verified with a laboratory study. The effect of the weighting factor is investigated using an acquisition layout comprising a single line of electrodes. The results show that an increasing weight decreases the area of improved resolution, but leads to a smaller number of electrode positions. Imaging results superior to a standard survey design were achieved using 56 per cent fewer electrodes. The performance was also tested on a 3-D acquisition grid, where superior resolution within a target at the base of an embankment was achieved using 22 per cent fewer electrodes than a comparable standard survey. The effect of the underlying resistivity distribution on the performance of the optimization was investigated, and it was shown that even strong resistivity contrasts have only a minor impact. The synthetic results were verified in a laboratory tank experiment, where notable image improvements were achieved. This work shows that optimized surveys can be designed that have a resolution superior to standard survey designs, while requiring significantly fewer electrodes. 
    This methodology thereby provides a means for improving the efficiency of geoelectrical imaging.

  16. Optimized survey design for Electrical Resistivity Tomography: combined optimization of measurement configuration and electrode placement

    NASA Astrophysics Data System (ADS)

    Uhlemann, Sebastian; Wilkinson, Paul B.; Maurer, Hansruedi; Wagner, Florian M.; Johnson, Timothy C.; Chambers, Jonathan E.

    2018-03-01

    Within geoelectrical imaging, the choice of measurement configurations and electrode locations is known to control the image resolution. Previous work has shown that optimized survey designs can provide a model resolution that is superior to standard survey designs. This paper demonstrates a methodology to optimize resolution within a target area, while limiting the number of required electrodes, thereby selecting optimal electrode locations. This is achieved by extending previous work on the `Compare-R' algorithm, which optimizes the model resolution in a target area by calculating updates to the resolution matrix. Here, an additional weighting factor is introduced that allows measurement configurations that can be acquired on a given set of electrodes to be preferentially added. The performance of the optimization is tested on two synthetic examples and verified with a laboratory study. The effect of the weighting factor is investigated using an acquisition layout comprising a single line of electrodes. The results show that an increasing weight decreases the area of improved resolution, but leads to a smaller number of electrode positions. Imaging results superior to a standard survey design were achieved using 56 per cent fewer electrodes. The performance was also tested on a 3D acquisition grid, where superior resolution within a target at the base of an embankment was achieved using 22 per cent fewer electrodes than a comparable standard survey. The effect of the underlying resistivity distribution on the performance of the optimization was investigated, and it was shown that even strong resistivity contrasts have only a minor impact. The synthetic results were verified in a laboratory tank experiment, where notable image improvements were achieved. This work shows that optimized surveys can be designed that have a resolution superior to standard survey designs, while requiring significantly fewer electrodes. 
    This methodology thereby provides a means for improving the efficiency of geoelectrical imaging.

  17. External quality assessment studies for laboratory performance of molecular and serological diagnosis of Chikungunya virus infection.

    PubMed

    Jacobsen, Sonja; Patel, Pranav; Schmidt-Chanasit, Jonas; Leparc-Goffart, Isabelle; Teichmann, Anette; Zeller, Herve; Niedrig, Matthias

    2016-03-01

    Since the re-emergence of Chikungunya virus (CHIKV) in Reunion in 2005 and the recent outbreak in the Caribbean islands, with expansion to the Americas, CHIKV diagnostics have become very important. We evaluated the performance of laboratories worldwide in the molecular and serological diagnosis of CHIKV. A panel of 12 samples for molecular testing and 13 samples for serology was provided to 60 laboratories in 40 countries to evaluate the sensitivity and specificity of molecular and serological testing. The panel for molecular diagnostic testing was analysed by 56 laboratories, which returned 60 data sets of results, whereas 56 and 60 data sets were returned by the participating laboratories for IgG and IgM diagnostics, respectively. Of the 60 molecular data sets, 23 performed optimally, 7 acceptably, and 30 required improvement. Of 50 IgM data sets, only one laboratory showed optimal performance, 9 data sets were acceptable, and the rest required improvement. Of 46 IgG serology data sets, 20 were optimal, 2 acceptable, and 24 required improvement. The evaluation of some of the diagnostic performances allows the quality of results to be linked to the in-house methods or commercial assays used. This external quality assessment of CHIKV diagnostics provides a good overview of laboratory performance regarding the sensitivity and specificity of the molecular and serological diagnostics required for quick and reliable analysis of suspected CHIKV patients. Nearly half of the laboratories need to improve their diagnostic profile to achieve better performance. Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  18. Object-Oriented Multi-Disciplinary Design, Analysis, and Optimization Tool

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2011-01-01

    An Object-Oriented Optimization (O3) tool was developed that leverages existing tools and practices, and allows the easy integration and adoption of new state-of-the-art software. At the heart of the O3 tool is the Central Executive Module (CEM), which can integrate disparate software packages in a cross platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. This object-oriented framework can integrate the analysis codes for multiple disciplines instead of relying on one code to perform the analysis for all disciplines. The CEM was written in FORTRAN and the script commands for each performance index were submitted through the use of the FORTRAN Call System command. In this CEM, the user chooses an optimization methodology, defines objective and constraint functions from performance indices, and provides starting and side constraints for continuous as well as discrete design variables. The structural analysis modules such as computations of the structural weight, stress, deflection, buckling, and flutter and divergence speeds have been developed and incorporated into the O3 tool to build an object-oriented Multidisciplinary Design, Analysis, and Optimization (MDAO) tool.

  19. OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms

    DOE PAGES

    Meng, Zhaoyi; Koniges, Alice; He, Yun Helen; ...

    2016-09-21

    In this paper, we investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nystrom extension to calculate eigenvalue/eigenvectors of the graph Laplacian and this is a self-contained module that can be used in conjunction with other graph-Laplacian based methods such as spectral clustering. We use performance tools to collect the hotspots and memory access of the serial codes and use OpenMP as the parallelization language to parallelize the most time-consuming parts. Where possible, we also use library routines. We then optimize the OpenMP implementations and detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and test the optimization steps on emerging testbed systems based on Intel’s Knights Corner and Landing processors. We show both performance improvement and strong scaling behavior. Finally, a large number of optimization techniques and analyses are necessary before the algorithm reaches almost ideal scaling.

  20. In-flight performance optimization for rotorcraft with redundant controls

    NASA Astrophysics Data System (ADS)

    Ozdemir, Gurbuz Taha

    A conventional helicopter has limits on performance at high speeds because of the limitations of main rotor, such as compressibility issues on advancing side or stall issues on retreating side. Auxiliary lift and thrust components have been suggested to improve performance of the helicopter substantially by reducing the loading on the main rotor. Such a configuration is called the compound rotorcraft. Rotor speed can also be varied to improve helicopter performance. In addition to improved performance, compound rotorcraft and variable RPM can provide a much larger degree of control redundancy. This additional redundancy gives the opportunity to further enhance performance and handling qualities. A flight control system is designed to perform in-flight optimization of redundant control effectors on a compound rotorcraft in order to minimize power required and extend range. This "Fly to Optimal" (FTO) control law is tested in simulation using the GENHEL model. A model of the UH-60, a compound version of the UH-60A with lifting wing and vectored thrust ducted propeller (VTDP), and a generic compound version of the UH-60A with lifting wing and propeller were developed and tested in simulation. A model following dynamic inversion controller is implemented for inner loop control of roll, pitch, yaw, heave, and rotor RPM. An outer loop controller regulates airspeed and flight path during optimization. A Golden Section search method was used to find optimal rotor RPM on a conventional helicopter, where the single redundant control effector is rotor RPM. The FTO builds off of the Adaptive Performance Optimization (APO) method of Gilyard by performing low frequency sweeps on a redundant control for a fixed wing aircraft. A method based on the APO method was used to optimize trim on a compound rotorcraft with several redundant control effectors. 
    The controller can be used to optimize rotor RPM and compound control effectors through flight test or simulations in order to establish a schedule. The method has been expanded to search a two-dimensional control space. Simulation results demonstrate the ability to maximize range by optimizing stabilator deflection and an airspeed set point. Another set of results minimizes power required in high speed flight by optimizing collective pitch and stabilator deflection. Results show that the control laws effectively hold the flight condition while the FTO method is effective at improving performance. Optimizations show there can be issues when the control laws regulating altitude push the collective control towards its limits. So a modification was made to the control law to regulate airspeed and altitude using propeller pitch and angle of attack while the collective is held fixed or used as an optimization variable. A dynamic trim limit avoidance algorithm is applied to avoid control saturation in other axes during optimization maneuvers. Range and power optimization FTO simulations are compared with comprehensive sweeps of trim solutions, and FTO optimization is shown to be effective and reliable in reaching an optimum when optimizing up to two redundant controls. Use of redundant controls is shown to be beneficial for improving performance. The search method takes almost 25 minutes of simulated flight for optimization to be complete. The optimization maneuver itself can sometimes drive the power required to high values, so a power limit is imposed to restrict the search to avoid conditions where power is more than 5% higher than that of the initial trim state. With this modification, the time the optimization maneuver takes to complete is reduced to 21 minutes without any significant change in the optimal power value.
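
    The Golden Section search used for the single-redundant-control case is a standard bracketing method and can be sketched directly. The power-versus-RPM curve below is a hypothetical stand-in for the GENHEL model, with the minimum placed at an arbitrary 258 RPM.

```python
# Standard golden-section search for a one-dimensional minimum.
# The power model is an illustrative stand-in, not GENHEL.

def golden_section_min(f, lo, hi, tol=1e-4):
    phi = (5 ** 0.5 - 1) / 2            # inverse golden ratio, ~0.618
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)   # interior probe points
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                      # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = f(c)
        else:                            # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = f(d)
    return (a + b) / 2

# toy power-vs-RPM curve with its minimum at 258 RPM (hypothetical)
power = lambda rpm: 0.004 * (rpm - 258.0) ** 2 + 1200.0
best_rpm = golden_section_min(power, 220.0, 300.0)
```

    Each iteration shrinks the bracket by a constant factor and reuses one of the two interior evaluations, which is what makes the method attractive when each "evaluation" is a settled flight condition.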

  1. Performance evaluation of matrix gradient coils.

    PubMed

    Jia, Feng; Schultz, Gerrit; Testud, Frederik; Welz, Anna Masako; Weber, Hans; Littin, Sebastian; Yu, Huijun; Hennig, Jürgen; Zaitsev, Maxim

    2016-02-01

    In this paper, we present a new performance measure of a matrix coil (also known as a multi-coil) from the perspective of efficient, local, non-linear encoding without explicitly considering target encoding fields. An optimization problem based on a joint optimization for the non-linear encoding fields is formulated. Based on the derived objective function, a figure of merit of a matrix coil is defined, which is a generalization of a previously known resistive figure of merit for traditional gradient coils. A cylindrical matrix coil design with a high number of elements is used to illustrate the proposed performance measure. The results are analyzed to reveal novel features of matrix coil designs, which allowed us to optimize coil parameters, such as the number of coil elements. A comparison to a scaled, existing multi-coil is also provided to demonstrate the use of the proposed performance parameter. The assessment of a matrix gradient coil benefits from a single performance parameter that relates the local encoding performance of the coil to the dissipated power.

  2. An effective and optimal quality control approach for green energy manufacturing using design of experiments framework and evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Saavedra, Juan Alejandro

    Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to decision-making processes for reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology for including rework stations in the manufacturing process by identifying the number and location of these workstations. The factors considered in optimizing these stations are cost, cycle time, reworkability, and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to calculate product direct cost as the quality sigma level of the process changes. This is beneficial because a complete cost estimation does not need to be repeated every time the process yield changes. The cost estimation model is then used for the QC strategy optimization. To arrive at a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) on seven initial factors identified three significant factors and showed that one response variable was not required for the optimization. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA), which allows the evaluation of many candidate solutions in order to obtain feasible optimal solutions. 
The GA evaluates possible solutions based on cost, cycle time, reworkability, and rework benefit. Because this is a multi-objective optimization problem, it returns several possible solutions, presented as chromosomes that clearly state the number and location of the rework stations. The user selects among these solutions by deciding which of the four factors is most important for the product being manufactured or the company's objectives. The major contribution of this study is a methodology for identifying an effective and optimal QC strategy that incorporates the number and location of rework substations in order to minimize direct product cost and cycle time and maximize reworkability and rework benefit.
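A minimal single-objective sketch of the chromosome idea: one binary gene per workstation marks whether it receives a rework station. The costs, benefits, and scalarized fitness below are invented for illustration; the study's actual GA is multi-objective over cost, cycle time, reworkability, and rework benefit.

```python
import random

random.seed(0)

N = 8                                  # workstations in the line (assumed)
COST = [4, 2, 5, 3, 6, 2, 4, 3]        # hypothetical rework-station costs
BENEFIT = [5, 1, 7, 2, 9, 1, 3, 2]     # hypothetical rework benefits

def fitness(chrom):
    """Toy scalarization: total benefit minus total cost of chosen stations."""
    return sum((b - c) * g for g, c, b in zip(chrom, COST, BENEFIT))

def evolve(pop_size=30, gens=60, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, N)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < p_mut:        # point mutation
                i = random.randrange(N)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()  # ideally keeps only stations whose benefit exceeds cost
```

For these invented numbers the true optimum installs rework at stations 1, 3, and 5 only, for a net fitness of 6.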

  3. Solar thermal collectors using planar reflector

    NASA Technical Reports Server (NTRS)

    Espy, P. N.

    1978-01-01

    Specular reflectors have been used successfully with flat-plate collectors to achieve exceptionally high operating temperatures and high delivered energy per unit collector area. Optimal orientation of collectors and reflectors can result in even higher performance with an improved relationship between energy demand and supply. This paper reports on a study providing first order optimization of collector-reflector arrays in which single- and multiple-faceted reflectors in fixed or singly adjustable configurations provide delivered energy maxima in either summer or winter.

  4. Optimally stopped variational quantum algorithms

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing, which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of VQA and even improve its scaling properties.

  5. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    IPOST is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  6. Estimation and detection information trade-off for x-ray system optimization

    NASA Astrophysics Data System (ADS)

    Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali

    2016-05-01

    X-ray Computed Tomography (CT) systems perform complex imaging tasks that involve both detecting and estimating system parameters; a baggage imaging system, for example, performs threat detection while generating reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices there is a need for a concise metric that considers both detection and estimation information and then provides the user with the collection of possible optimal outcomes. In this paper a graphical analysis of the Estimation and Detection Information Trade-off (EDIT) is explored. EDIT produces curves that allow a system optimization decision to be made based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation-information/detection-information space, where the user is free to pick their own method of calculating these measures. The user can choose any desired figure of merit for detection information and estimation information; the EDIT curves then provide the collection of optimal outcomes. The paper first looks at two methods of creating EDIT curves: the curves can be calculated over a wide variety of systems, finding the optimal system by maximizing a figure of merit, or EDIT can be found as an upper bound of the information from a collection of systems. These two methods allow the user to choose a method of calculation that best fits the constraints of their actual system.
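One way to read an EDIT curve is as the non-dominated frontier of (detection information, estimation information) pairs over candidate systems. A sketch under that reading, with invented numbers rather than figures of merit from any real CT system:

```python
def pareto_front(points):
    """Keep points not dominated in both coordinates (larger is better)."""
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in points)]

# Hypothetical (detection info, estimation info) pairs for candidate systems
systems = [(1.0, 3.0), (2.0, 2.5), (2.5, 1.0), (1.5, 2.0), (0.5, 0.5)]
front = sorted(pareto_front(systems))   # the EDIT-style trade-off set
```

A designer would then pick a point on the frontier according to the relative cost assigned to detection versus estimation performance.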

  7. Integrated aerodynamic/dynamic/structural optimization of helicopter rotor blades using multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.

    1995-01-01

    This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.

  8. Multilevel decomposition approach to integrated aerodynamic/dynamic/structural optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.

    1994-01-01

    This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.

  9. Optimal cooperative control synthesis of active displays

    NASA Technical Reports Server (NTRS)

    Garg, S.; Schmidt, D. K.

    1985-01-01

    A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s^2 plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.

  10. A direct method for synthesizing low-order optimal feedback control laws with application to flutter suppression

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Newsom, J. R.; Abel, I.

    1980-01-01

    A direct method of synthesizing a low-order optimal feedback control law for a high order system is presented. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean square steady state responses and control inputs. The controller is shown to be equivalent to a partial state estimator. The method is applied to the problem of active flutter suppression. Numerical results are presented for a 20th order system representing an aeroelastic wind-tunnel wing model. Low-order controllers (fourth and sixth order) are compared with a full order (20th order) optimal controller and found to provide near optimal performance with adequate stability margins.

  11. Prediction of pilot-aircraft stability boundaries and performance contours

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Broussard, J. R.

    1977-01-01

    Control-theoretic pilot models can provide important new insights regarding the stability and performance characteristics of the pilot-aircraft system. Optimal-control pilot models can be formed for a wide range of flight conditions, suggesting that the human pilot can maintain stability if he adapts his control strategy to the aircraft's changing dynamics. Of particular concern is the effect of sub-optimal pilot adaptation as an aircraft transitions from low to high angle-of-attack during rapid maneuvering, as the changes in aircraft stability and control response can be extreme. This paper examines the effects of optimal and sub-optimal effort during a typical 'high-g' maneuver, and it introduces the concept of minimum-control effort (MCE) adaptation. Limited experimental results tend to support the MCE adaptation concept.

  12. Optimal power and efficiency of quantum Stirling heat engines

    NASA Astrophysics Data System (ADS)

    Yin, Yong; Chen, Lingen; Wu, Feng

    2017-01-01

    A quantum Stirling heat engine model in which imperfect regeneration and heat leakage are considered is established in this paper. A single particle confined in a one-dimensional infinite potential well is studied, and the system consists of countless replicas. Each particle is confined in its own potential well, and its occupation probabilities can be expressed by the thermal-equilibrium Gibbs distribution. Based on the Schrödinger equation, expressions for the power output and efficiency of the engine are obtained. Effects of imperfect regeneration and heat leakage on the optimal performance are discussed. The optimal performance region and the optimal values of important parameters of the engine cycle are obtained. The results can provide guidelines for the design of a quantum Stirling heat engine.
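The occupation probabilities follow from the particle-in-a-box spectrum E_n ∝ n² and the Gibbs distribution. A sketch in reduced units (ground-state energy E_1 = k_B = 1, an assumption made purely for illustration):

```python
import math

def gibbs_probs(T, nmax=200):
    """Occupation probabilities p_n ∝ exp(-n^2/T) for levels n = 1..nmax."""
    weights = [math.exp(-n * n / T) for n in range(1, nmax + 1)]
    Z = sum(weights)                      # partition function
    return [w / Z for w in weights]

def mean_energy(T):
    """Ensemble-average energy <E> = sum_n E_n p_n with E_n = n^2."""
    return sum(n * n * p for n, p in enumerate(gibbs_probs(T), start=1))

cold = mean_energy(0.2)   # close to the ground-state energy of 1
hot = mean_energy(5.0)    # thermally excited, several times larger
```

Heat exchanged along the isothermal and isochoric branches of the Stirling cycle can then be built from differences of such ensemble averages.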

  13. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    PubMed

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.

  14. Optimal cube-connected cube multiprocessors

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Wu, Jie

    1993-01-01

    Many CFD (computational fluid dynamics) and other scientific applications can be partitioned into subproblems. In general, however, the partitioned subproblems are very large; they demand high-performance computing power themselves, and the solutions of the subproblems have to be combined at each time step. The cube-connected cube (CCCube) architecture is studied. The CCCube architecture is an extended hypercube structure with each node represented as a cube. It requires fewer physical links between nodes than the hypercube, and provides the same communication support as the hypercube on many applications. The reduced physical links can be used to enhance the bandwidth of the remaining links and, therefore, the overall performance. The concept of and the method to obtain optimal CCCubes, which are the CCCubes with a minimum number of links for a given total number of nodes, are proposed. The superiority of optimal CCCubes over standard hypercubes is also shown in terms of link usage in the embedding of a binomial tree. A useful computation structure based on a semi-binomial tree for divide-and-conquer parallel algorithms is identified, and it is shown that this structure can be implemented in optimal CCCubes without performance degradation compared with regular hypercubes. The results presented should provide a useful approach to the design of scientific parallel computers.

  15. Shuttle cryogenic supply system optimization study. Volume 6: Appendixes

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The optimization of the cryogenic supply system for space shuttles is discussed. The subjects considered are: (1) auxiliary power unit parametric data, (2) propellant acquisition, (3) thermal protection and thermodynamic properties, (4) instrumentation and controls, and (5) initial component redundancy evaluations. Diagrams of the systems are provided. Graphs of the performance capabilities are included.

  16. Optimizing water permeability through the hourglass shape of aquaporins

    PubMed Central

    Gravelle, Simon; Joly, Laurent; Detcheverry, François; Ybert, Christophe; Cottin-Bizonne, Cécile; Bocquet, Lydéric

    2013-01-01

    The ubiquitous aquaporin channels are able to conduct water across cell membranes, combining the seemingly antagonistic functions of very high selectivity with remarkable permeability. Whereas molecular details are obviously key to performing these tasks, the overall efficiency of transport in such nanopores is also strongly limited by viscous dissipation arising at the connection between the nanoconstriction and the nearby bulk reservoirs. In this contribution, we focus on these so-called entrance effects and specifically examine whether the characteristic hourglass shape of aquaporins may arise from a geometrical optimum for such hydrodynamic dissipation. Using a combination of finite-element calculations and analytical modeling, we show that conical entrances with a suitable opening angle can indeed provide a large increase of the overall channel permeability. Moreover, the optimal opening angles that maximize the permeability are found to compare well with the angles measured in a large variety of aquaporins. This suggests that the hourglass shape of aquaporins could be the result of a natural selection process toward optimal hydrodynamic transport. Finally, in a biomimetic perspective, these results provide guidelines to design artificial nanopores with optimal performances. PMID:24067650

  17. Performance analysis and optimization of high capacity pulse tube refrigerator

    NASA Astrophysics Data System (ADS)

    Ghahremani, Amir R.; Saidi, M. H.; Jahanbakhshi, R.; Roshanghalb, F.

    The high capacity pulse tube refrigerator (HCPTR) is a new generation of cryocooler tailored to provide more than 250 W of cooling power at cryogenic temperatures. The most important characteristics of the HCPTR compared to other types of pulse tube refrigerators are a powerful pressure wave generator and an accurate design. In this paper the influence of geometrical and operating parameters on the performance of a double inlet pulse tube refrigerator (DIPTR) is studied. The model is validated against existing experimental data. As a result of this optimization, a new configuration of HCPTR is proposed, which provides 335 W of cooling power at a cold-end temperature of 80 K with a frequency of 50 Hz and a COP of 0.05.

  18. A new inertia weight control strategy for particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Xianming; Wang, Hongbo

    2018-04-01

    Particle Swarm Optimization (PSO) is a swarm intelligence algorithm inspired by the behavior of bird flocks. The inertia weight, one of the most important parameters of PSO, balances the algorithm's exploration and exploitation. This paper proposes a new inertia weight control strategy, and PSO with this new strategy is tested on four benchmark functions. The results show that the new strategy provides PSO with better performance.
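A bare-bones PSO sketch showing where the inertia weight w enters the velocity update; the linearly decreasing schedule below is a common baseline strategy, not the control strategy proposed in the paper, and the sphere benchmark is one typical test function.

```python
import random

random.seed(1)

def sphere(x):
    """Benchmark function with global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n=20, iters=150, w_start=0.9, w_end=0.4,
        c1=2.0, c2=2.0, vmax=4.0):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / iters   # decreasing inertia
        for i in range(n):
            for d in range(dim):
                v = (w * V[i][d]
                     + c1 * random.random() * (pbest[i][d] - X[i][d])
                     + c2 * random.random() * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, v))    # velocity clamp
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest

best = pso(sphere)  # should land near the origin
```

Large w early favors exploration of the search space; small w late favors exploitation around the best-known positions.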

  19. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition

    PubMed Central

    Sánchez, Daniela; Melin, Patricia

    2017-01-01

    A grey wolf optimizer for a modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition and, to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used for tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding optimal parameters of its architecture; these parameters are the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and their numbers of neurons. A great variety of approaches and new techniques has emerged within evolutionary computing to help find optimal solutions to problems or models, and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to determine which of these techniques provides better results when applied to human recognition. PMID:28894461

  20. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition.

    PubMed

    Sánchez, Daniela; Melin, Patricia; Castillo, Oscar

    2017-01-01

    A grey wolf optimizer for a modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition and, to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used for tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding optimal parameters of its architecture; these parameters are the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and their numbers of neurons. A great variety of approaches and new techniques has emerged within evolutionary computing to help find optimal solutions to problems or models, and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to determine which of these techniques provides better results when applied to human recognition.
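For orientation, a bare-bones grey wolf optimizer on a toy function: the three best wolves (alpha, beta, delta) guide the moves of the rest. Keeping the leaders fixed within an iteration is an elitist simplification made here for brevity, and the toy objective stands in for the far more involved MGNN architecture search of the paper.

```python
import random

random.seed(2)

def sphere(x):
    """Toy objective with global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def gwo(f, dim=2, n=15, iters=120, lo=-5.0, hi=5.0):
    wolves = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - t / iters)          # control parameter decays 2 -> 0
        for i in range(3, n):              # leaders kept as-is (elitist variant)
            new_pos = []
            for d in range(dim):
                guided = []
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2.0 * a * r1 - a               # in [-a, a]
                    C = 2.0 * r2                       # in [0, 2]
                    D = abs(C * leader[d] - wolves[i][d])
                    guided.append(leader[d] - A * D)
                new_pos.append(sum(guided) / 3.0)      # average of three moves
            wolves[i] = [min(max(v, lo), hi) for v in new_pos]
    return min(wolves, key=f)

best = gwo(sphere)
```

Early on, |A| can exceed 1, letting wolves diverge from the leaders (exploration); as a decays, the pack converges on the alpha (exploitation).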

  1. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best coding gain performance in an LSI-efficient implementation was the optimal constraint-length-five, rate-one-half convolutional code. To determine the specific set of design parameters that optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design trade-off study are summarized, and the functional and physical MCD chip characteristics are presented.
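To make the code concrete, here is a sketch of a rate-one-half, constraint-length-five convolutional encoder using the (23, 35) octal generator pair commonly cited as optimal for K = 5; the abstract does not state which generators the MCD used, so these are assumptions. The decoder would run the Viterbi (maximum-likelihood) algorithm over the resulting trellis.

```python
G1, G2 = 0o23, 0o35   # generator polynomials (octal), K = 5, rate 1/2

def encode(bits):
    """Shift each input bit into a 5-bit register; emit two parity bits."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b11111         # 5-bit shift register
        out.append(bin(state & G1).count("1") & 1)   # parity of G1 taps
        out.append(bin(state & G2).count("1") & 1)   # parity of G2 taps
    return out

codeword = encode([1, 0, 1, 1, 0])   # 5 input bits -> 10 coded bits
```

The impulse response of each branch reads out the generator taps, which is a quick sanity check on the register wiring.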

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bales, Benjamin B; Barrett, Richard F

    In almost all modern scientific applications, developers achieve the greatest performance gains by tuning algorithms, communication systems, and memory access patterns, while leaving low level instruction optimizations to the compiler. Given the increasingly varied and complicated x86 architectures, the value of these optimizations is unclear, and, due to time and complexity constraints, it is difficult for many programmers to experiment with them. In this report we explore the potential gains of these 'last mile' optimization efforts on an AMD Barcelona processor, providing readers with relevant information so that they can decide whether investment in the presented optimizations is worthwhile.

  3. Combined shape and topology optimization for minimization of maximal von Mises stress

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, Haojie; Christiansen, Asger N.; Tortorelli, Daniel A.

    Here, this work shows that a combined shape and topology optimization method can produce optimal 2D designs with minimal stress subject to a volume constraint. The method represents the surface explicitly and discretizes the domain into a simplicial complex which adapts both structural shape and topology. By performing repeated topology and shape optimizations and adaptive mesh updates, we can minimize the maximum von Mises stress using the p-norm stress measure with p-values as high as 30, provided that the stress is calculated with sufficient accuracy.

  4. Combined shape and topology optimization for minimization of maximal von Mises stress

    DOE PAGES

    Lian, Haojie; Christiansen, Asger N.; Tortorelli, Daniel A.; ...

    2017-01-27

    Here, this work shows that a combined shape and topology optimization method can produce optimal 2D designs with minimal stress subject to a volume constraint. The method represents the surface explicitly and discretizes the domain into a simplicial complex which adapts both structural shape and topology. By performing repeated topology and shape optimizations and adaptive mesh updates, we can minimize the maximum von Mises stress using the p-norm stress measure with p-values as high as 30, provided that the stress is calculated with sufficient accuracy.
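The p-norm stress measure referenced above aggregates element stresses into one differentiable quantity that approaches the true maximum from above as p grows, which is why p-values as high as 30 demand accurately computed stresses. A sketch with invented element stresses:

```python
def p_norm(values, p):
    """Global p-norm measure: always >= max(values), tightening as p grows."""
    return sum(v ** p for v in values) ** (1.0 / p)

# Hypothetical element von Mises stresses; the true maximum is 240.0
stresses = [100.0, 220.0, 180.0, 240.0, 90.0]

tight = p_norm(stresses, 30)   # within about 1% of the true maximum
loose = p_norm(stresses, 4)    # smoother, but overestimates the max more
```

The differentiable p-norm lets gradient-based shape and topology updates reduce the peak stress without the non-smoothness of a hard max.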

  5. Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  6. Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions

    NASA Technical Reports Server (NTRS)

    Gilland, James H.

    1991-01-01

    The detailed mission and system optimization of low-thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system designs without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfer and Mars missions have been derived. Analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.

  7. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  8. An EGR performance evaluation and decision-making approach based on grey theory and grey entropy analysis

    PubMed Central

    2018-01-01

    Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, tasks that currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization. PMID:29377956

  9. An EGR performance evaluation and decision-making approach based on grey theory and grey entropy analysis.

    PubMed

    Zu, Xianghuan; Yang, Chuanlei; Wang, Hechun; Wang, Yinyan

    2018-01-01

    Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, tasks that currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization.
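    The grey relational step at the heart of such methods can be sketched as follows. This is the standard Deng formulation with distinguishing coefficient ζ = 0.5, not the paper's full multi-objective decision model, and the series values are invented:

```python
def grey_relational_grades(reference, candidates, zeta=0.5):
    """Grey relational grade of each candidate series against a reference.

    Standard Deng grey relational analysis; zeta is the distinguishing
    coefficient, conventionally 0.5. Series are assumed pre-normalized.
    """
    # Absolute deviations of each candidate from the reference, pointwise.
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    if d_max == 0:  # all candidates identical to the reference
        return [1.0] * len(candidates)
    grades = []
    for row in deltas:
        # Relational coefficient per point, averaged into a grade.
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

A grade of 1.0 marks a candidate identical to the reference; lower grades mark weaker relational closeness, which is what the decision model then weights.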

  10. Fog computing job scheduling optimization based on bees swarm

    NASA Astrophysics Data System (ADS)

    Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid

    2018-04-01

    Fog computing is a new computing architecture composed of a set of near-user edge devices, called fog nodes, which collaborate to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources close to mobile users at the network edge. In this new paradigm, management and operating functions such as job scheduling aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) to address the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between CPU execution time and the memory allocated to fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposed approach outperforms traditional particle swarm optimization and the genetic algorithm in terms of CPU execution time and allocated memory.
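    As a rough sketch of this kind of task-to-node scheduling, the following substitutes a simple seeded local search for the BLA; node speeds, task sizes, and the cost weighting are invented for illustration:

```python
import random

def schedule_cost(assignment, task_cpu, task_mem, node_speed, node_mem):
    """Weighted cost: makespan across nodes plus a penalty for memory overflow."""
    loads = [0.0] * len(node_speed)
    mems = [0.0] * len(node_speed)
    for task, node in enumerate(assignment):
        loads[node] += task_cpu[task] / node_speed[node]
        mems[node] += task_mem[task]
    overflow = sum(max(0.0, m - cap) for m, cap in zip(mems, node_mem))
    return max(loads) + 10.0 * overflow

def optimize_schedule(task_cpu, task_mem, node_speed, node_mem,
                      iterations=2000, seed=0):
    """Seeded local search: repeatedly reassign one task and keep improvements."""
    rng = random.Random(seed)
    n_tasks, n_nodes = len(task_cpu), len(node_speed)
    best = [rng.randrange(n_nodes) for _ in range(n_tasks)]
    best_cost = schedule_cost(best, task_cpu, task_mem, node_speed, node_mem)
    for _ in range(iterations):
        cand = list(best)
        cand[rng.randrange(n_tasks)] = rng.randrange(n_nodes)  # move one task
        cost = schedule_cost(cand, task_cpu, task_mem, node_speed, node_mem)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

# Hypothetical workload: six tasks, two fog nodes (the second twice as fast).
task_cpu = [4.0, 3.0, 3.0, 2.0, 2.0, 1.0]
task_mem = [2.0, 1.0, 1.0, 1.0, 1.0, 1.0]
node_speed = [1.0, 2.0]
node_mem = [10.0, 10.0]
best, best_cost = optimize_schedule(task_cpu, task_mem, node_speed, node_mem)
```

A population-based method like BLA replaces the single-candidate mutation loop with a colony of candidate schedules, but the encoding and cost function are the same shape.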

  11. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation analysis, by referring to bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and provides better performance than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both uplink and downlink transmission.
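    BER figures of this kind typically come from a Gaussian approximation; one form widely used in SAC-OCDMA analyses is BER = (1/2)·erfc(√(SNR/8)). A minimal sketch (the SNR values below are illustrative, not taken from the paper):

```python
import math

def ber_from_snr(snr_linear):
    """Bit error rate from linear SNR via the Gaussian approximation
    common in SAC-OCDMA analyses: BER = (1/2) * erfc(sqrt(SNR / 8))."""
    return 0.5 * math.erfc(math.sqrt(snr_linear / 8.0))

# Under this approximation, the 10^-9 BER threshold corresponds to a
# linear SNR of roughly 144 (about 21.6 dB). Illustrative checks:
ok = ber_from_snr(160.0)    # above-threshold SNR -> BER below 10^-9
bad = ber_from_snr(80.0)    # low SNR -> BER well above 10^-9
```

Once SNR is expressed as a function of the number of subscribers (which is where the code's correlation properties enter), the maximum supported user count at a target BER follows directly from this relation.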

  12. ASTROS: A multidisciplinary automated structural design tool

    NASA Technical Reports Server (NTRS)

    Neill, D. J.

    1989-01-01

    ASTROS (Automated Structural Optimization System) is a finite-element-based multidisciplinary structural optimization procedure developed under Air Force sponsorship to perform automated preliminary structural design. The design task is the determination of the structural sizes that provide an optimal structure while satisfying numerous constraints from many disciplines. In addition to its automated design features, ASTROS provides a general transient and frequency response capability, as well as a special feature to perform a transient analysis of a vehicle subjected to a nuclear blast. The motivation for the development of a single multidisciplinary design tool is that such a tool can provide improved structural designs in less time than is currently needed. The role of such a tool is even more apparent as modern materials come into widespread use. Balancing conflicting requirements for the structure's strength and stiffness while exploiting the benefits of material anisotropy is perhaps an impossible task without assistance from an automated design tool. Finally, the use of a single tool can bring the design task into better focus among design team members, thereby improving their insight into the overall task.

  13. System, apparatus and methods to implement high-speed network analyzers

    DOEpatents

    Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E

    2015-11-10

    Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast- and slow-path workflow used to accelerate specific processing units. Such a dispatcher facility can also be used as a cache of policies: if a policy is found, the packet manipulations associated with that policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.

  14. Co-Optimization of Fuels and Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, John

    2016-04-11

    The Co-Optimization of Fuels and Engines (Co-Optima) initiative is a new DOE initiative focused on accelerating the introduction of affordable, scalable, and sustainable biofuels and high-efficiency, low-emission vehicle engines. The simultaneous fuels and vehicles research and development (R&D) is designed to deliver maximum energy savings, emissions reduction, and on-road vehicle performance. The initiative's integrated approach combines the previously independent areas of biofuels and combustion R&D, bringing together two DOE Office of Energy Efficiency & Renewable Energy research offices, ten national laboratories, and numerous industry and academic partners to simultaneously tackle fuel and engine R&D to maximize energy savings and on-road vehicle performance while dramatically reducing transportation-related petroleum consumption and greenhouse gas (GHG) emissions. This multi-year project will provide industry with the scientific underpinnings required to move new biofuels and advanced engine systems to market faster while identifying and addressing barriers to their commercialization. This ambitious, first-of-its-kind approach simultaneously tackles fuel and engine innovation to co-optimize the performance of both elements and provide dramatic and rapid cuts in fuel use and emissions. This presentation provides an overview of the initiative and reviews recent progress on both advanced spark-ignition and compression-ignition approaches.

  15. Design of multi-energy fields coupling testing system of vertical axis wind power system

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Yang, Z. X.; Li, G. S.; Song, L.; Ma, C.

    2016-08-01

    As wind is one of the main renewable energy sources, its conversion efficiency is a focus of research. Present methods of enhancing the conversion efficiency mostly involve improving the wind rotor structure, optimizing the generator parameters, refining the energy storage controller, and so on. Because the conversion process involves energy conversion across multiple energy fields, such as wind energy, mechanical energy and electrical energy, the coupling effects between them influence the overall conversion efficiency. In this paper, using system integration analysis technology, a testing system based on multi-energy field coupling (MEFC) of a vertical axis wind power system is proposed. When the maximum efficiency of the wind rotor is satisfied, the system can match the generator parameters to the output performance of the wind rotor. The voltage controller can transfer the unstable electric power to the battery on the basis of optimized parameters such as charging times and charging voltage. Through the communication connection and regulation of the upper computer system (UCS), the coupling parameters can be configured to an optimal state, improving the overall conversion efficiency. This method can test whole wind turbine (WT) performance systematically and evaluate the design parameters effectively. It not only provides a testing method for the system structure design and parameter optimization of the wind rotor, generator and voltage controller, but also provides a new testing method for whole-performance optimization of a vertical axis wind energy conversion system (WECS).

  16. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and may undermine the performance of the algorithms. In this study, we introduce a robust optimization (RO) approach, the Iterative Linear Decision Rule (ILDR), to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of the ILDR approximations by assigning a desired number of piecewise-linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies, including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both single- and multireservoir systems efficiently. The single-reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows, and it outperforms the SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in the historical record. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single-reservoir case study in terms of optimal value and distributional robustness.

  17. Near Zero Energy House (NZEH) Design Optimization to Improve Life Cycle Cost Performance Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Latief, Y.; Berawi, M. A.; Koesalamwardi, A. B.; Supriadi, L. S. R.

    2018-03-01

    Near Zero Energy House (NZEH) is a housing concept that provides energy efficiency by using renewable energy technologies and passive house design. Currently, NZEHs are quite expensive due to the high costs of the equipment and materials for solar panels, insulation, fenestration and other renewable energy technology. Therefore, a study to obtain the optimum design of an NZEH is necessary, the aim being an economical life cycle cost performance. One optimization method that can be utilized is the Genetic Algorithm, which obtains the optimum design from combinations of NZEH design variables. This paper discusses a study to identify the optimum design of an NZEH that provides optimum life cycle cost performance using a Genetic Algorithm. In this study, an experiment through extensive design simulations of a one-level house model was conducted. As a result, the study provides the optimum design from combinations of NZEH design variables, namely building orientation, window-to-wall ratio, and glazing type, that maximizes the energy generated by the photovoltaic panel. Hence, the design supports an optimum life cycle cost performance of the house.
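    A toy sketch of GA-based design search of this kind is shown below. The three design variables match those named in the abstract, but the candidate values and the life-cycle-cost function are invented stand-ins for the building-energy simulation:

```python
import random

ORIENTATIONS = list(range(8))        # 8 compass orientations (hypothetical)
WWR = [0.1, 0.2, 0.3, 0.4, 0.5]      # candidate window-to-wall ratios
GLAZING = [0, 1, 2]                  # glazing types (hypothetical)

def life_cycle_cost(orientation, wwr_idx, glazing):
    # Invented separable toy cost with a unique minimum at (2, WWR=0.3, 1);
    # a real study would run an energy/cost simulation here.
    return ((orientation - 2) ** 2
            + 50.0 * (WWR[wwr_idx] - 0.3) ** 2
            + 3 * (glazing - 1) ** 2)

def ga_optimize(generations=300, pop_size=20, seed=1):
    """Elitist GA with uniform crossover and single-gene mutation."""
    rng = random.Random(seed)
    ranges = [len(ORIENTATIONS), len(WWR), len(GLAZING)]
    pop = [[rng.randrange(r) for r in ranges] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: life_cycle_cost(*ind))
        elite = pop[: pop_size // 2]              # keep the better half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # crossover
            g = rng.randrange(3)                  # mutate one gene
            child[g] = rng.randrange(ranges[g])
            children.append(child)
        pop = elite + children
    best = min(pop, key=lambda ind: life_cycle_cost(*ind))
    return best, life_cycle_cost(*best)
```

Each individual encodes one candidate house design; replacing the toy cost with a simulation-backed life-cycle-cost evaluation gives the search structure the abstract describes.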

  18. Novel characterization method of impedance cardiography signals using time-frequency distributions.

    PubMed

    Escrivá Muñoz, Jesús; Pan, Y; Ge, S; Jensen, E W; Vallverdú, M

    2018-03-16

    The purpose of this document is to describe a methodology for selecting the most adequate time-frequency distribution (TFD) kernel for the characterization of impedance cardiography (ICG) signals. The predominant ICG beat was extracted from a patient and synthesized using time-frequency variant Fourier approximations. These synthesized signals were used to optimize several TFD kernels according to a performance maximization. The optimized kernels were tested for noise resistance on a clinical database. The resulting optimized TFD kernels are presented with their performance calculated using newly proposed methods. The procedure explained in this work showcases a new method to select an appropriate kernel for ICG signals and compares the performance of different time-frequency kernels found in the literature for the case of ICG signals. We conclude that, for ICG signals, the performance (P) of the spectrogram with either Hanning or Hamming windows (P = 0.780) and the extended modified beta distribution (P = 0.765) provided similar results, higher than the rest of the analyzed kernels. Graphical abstract: Flowchart for the optimization of time-frequency distribution kernels for impedance cardiography signals.

  19. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
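    The "minimize stride" rule of thumb discussed above can be illustrated with two loop orders over the same 2-D data. Pure-Python timings are dominated by interpreter overhead, so this only sketches the access patterns; the cache effects the paper measures show up in compiled languages:

```python
# Row-by-row traversal touches elements of each row contiguously
# (unit stride); column-by-column traversal takes one element from
# each row (stride of one full row). Both compute the same sum.
N = 200
a = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_major(m):
    total = 0
    for row in m:          # unit stride: consecutive elements of a row
        for x in row:
            total += x
    return total

def sum_col_major(m):
    total = 0
    n = len(m[0])
    for j in range(n):     # large stride: jump one row per access
        for row in m:
            total += row[j]
    return total
```

On row-major layouts (C, NumPy defaults) the first order typically runs faster because each cache line fetched is fully used; the paper's point is that this rule, while sound, does not by itself predict performance across architectures.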

  20. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates-as reported by a cache simulation tool, and confirmed by hardware counters-only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  1. Optimal service using Matlab - simulink controlled Queuing system at call centers

    NASA Astrophysics Data System (ADS)

    Balaji, N.; Siva, E. P.; Chandrasekaran, A. D.; Tamilazhagan, V.

    2018-04-01

    This paper presents graphical, integrated-model-based academic research on telephone call centres, introducing the important features of impatient customers and abandonments in the queueing system. The modern call centre is a complex socio-technical system, and queueing theory has become a suitable tool in the telecom industry for providing better online services. Matlab-Simulink multi-queue structured models provide better solutions in complex situations at call centres. Service performance measures are analyzed at the optimal level through the Simulink queueing model.
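    The queueing calculations behind such call-centre models can be sketched with the classic M/M/c (Erlang C) formula; modelling abandonment by impatient customers (Erlang A) requires an extended model or a simulation like the Simulink one above, which this analytic baseline omits:

```python
import math

def erlang_c(offered_load, servers):
    """Probability an arriving call must wait (Erlang C formula).

    offered_load = arrival_rate / service_rate, in Erlangs; requires
    offered_load < servers for a stable queue. Abandonment is ignored.
    """
    a, c = offered_load, servers
    tail = (a ** c / math.factorial(c)) * (c / (c - a))
    head = sum(a ** k / math.factorial(k) for k in range(c))
    return tail / (head + tail)

def mean_wait(arrival_rate, service_rate, servers):
    """Mean waiting time in queue: W_q = P_wait / (c*mu - lambda)."""
    a = arrival_rate / service_rate
    return erlang_c(a, servers) / (servers * service_rate - arrival_rate)
```

For example, 2 Erlangs of offered load on 3 agents gives a waiting probability of 4/9; such closed-form baselines are useful sanity checks for simulation models.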

  2. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significantly better-performing algorithms. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered. PMID:25992655
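    A minimal DE/rand/1/bin implementation, the scheme family the survey finds strongest overall, is sketched below on the sphere function; the parameter choices (F = 0.5, CR = 0.9) are common defaults, not values from the paper:

```python
import random

def sphere(x):
    """Classic benchmark: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def differential_evolution(f, dim=3, bounds=(-5.0, 5.0), pop_size=30,
                           generations=200, F=0.5, CR=0.9, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct donors (DE/rand/1).
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:   # binomial crossover
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    trial.append(min(max(v, lo), hi))  # clamp to bounds
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

Swapping in any of the thirty benchmark functions the paper uses (Ackley, Rastrigin, etc.) requires only changing `f` and `dim`.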

  3. Dynamic Systems Analysis for Turbine Based Aero Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.

    2016-01-01

    The aircraft engine design process seeks to optimize the overall system-level performance, weight, and cost for a given concept. Steady-state simulations and data are used to identify trade-offs that should be balanced to optimize the system, in a process known as systems analysis. These systems analysis simulations and data may not adequately capture the true performance trade-offs that exist during transient operation. Dynamic systems analysis provides the capability for assessing these dynamic trade-offs at an earlier stage of the engine design process. The dynamic systems analysis concept, developed tools, and potential benefits are presented in this paper. To provide this capability, the Tool for Turbine Engine Closed-loop Transient Analysis (TTECTrA) was developed to provide the user with an estimate of the closed-loop performance (response time) and operability (high pressure compressor surge margin) for a given engine design and set of control design requirements. TTECTrA, along with engine deterioration information, can be used to develop a more generic relationship between performance and operability that can impact the engine design constraints and potentially lead to a more efficient engine.

  4. Optimization and performance of bifacial solar modules: A global perspective

    DOE PAGES

    Sun, Xingshu; Khan, Mohammad Ryyan; Deline, Chris; ...

    2018-02-06

    With the rapidly growing interest in bifacial photovoltaics (PV), a worldwide map of their potential performance can help assess and accelerate the global deployment of this emerging technology. However, the existing literature only highlights optimized bifacial PV for a few geographic locations or develops worldwide performance maps for very specific configurations, such as the vertical installation. It is still difficult to translate these location- and configuration-specific conclusions to a general optimized performance of this technology. In this paper, we present a global study and optimization of bifacial solar modules using a rigorous and comprehensive modeling framework. Our results demonstrate that with a low albedo of 0.25, the bifacial gain of ground-mounted bifacial modules is less than 10% worldwide. However, increasing the albedo to 0.5 and elevating modules 1 m above the ground can boost the bifacial gain to 30%. Moreover, we derive a set of empirical design rules, which optimize bifacial solar modules across the world and provide the groundwork for rapid assessment of the location-specific performance. We find that ground-mounted, vertical, east-west-facing bifacial modules will outperform their south-north-facing, optimally tilted counterparts by up to 15% below the latitude of 30 degrees, for an albedo of 0.5. The relative energy output is reversed in latitudes above 30 degrees. A detailed and systematic comparison with data from Asia, Africa, Europe, and North America validates the model presented in this paper.

  5. Optimization and performance of bifacial solar modules: A global perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Xingshu; Khan, Mohammad Ryyan; Deline, Chris

    With the rapidly growing interest in bifacial photovoltaics (PV), a worldwide map of their potential performance can help assess and accelerate the global deployment of this emerging technology. However, the existing literature only highlights optimized bifacial PV for a few geographic locations or develops worldwide performance maps for very specific configurations, such as the vertical installation. It is still difficult to translate these location- and configuration-specific conclusions to a general optimized performance of this technology. In this paper, we present a global study and optimization of bifacial solar modules using a rigorous and comprehensive modeling framework. Our results demonstrate that with a low albedo of 0.25, the bifacial gain of ground-mounted bifacial modules is less than 10% worldwide. However, increasing the albedo to 0.5 and elevating modules 1 m above the ground can boost the bifacial gain to 30%. Moreover, we derive a set of empirical design rules, which optimize bifacial solar modules across the world and provide the groundwork for rapid assessment of the location-specific performance. We find that ground-mounted, vertical, east-west-facing bifacial modules will outperform their south-north-facing, optimally tilted counterparts by up to 15% below the latitude of 30 degrees, for an albedo of 0.5. The relative energy output is reversed in latitudes above 30 degrees. A detailed and systematic comparison with data from Asia, Africa, Europe, and North America validates the model presented in this paper.

  6. Optimized bio-inspired stiffening design for an engine nacelle.

    PubMed

    Lazo, Neil; Vodenitcharova, Tania; Hoffman, Mark

    2015-11-04

    Structural efficiency is a common engineering goal, in which an ideal solution provides a structure with optimized performance at minimized weight, with consideration of material mechanical properties, structural geometry, and manufacturability. This study aims to address this goal in developing high-performance, lightweight, stiff mechanical components by creating an optimized design from a biologically inspired template. The approach is implemented in the optimization of rib stiffeners along an aircraft engine nacelle. The helical and angled arrangements of cellulose fibres in plants were chosen as the bio-inspired template. Optimization of total displacement and weight was carried out using a genetic algorithm (GA) coupled with finite element analysis. Iterations showed a gradual convergence in normalized fitness. Displacement was given higher emphasis in optimization, so the GA tended towards individual designs with weights near the mass constraint. Dominant features of the resulting designs were helical ribs with rectangular cross-sections having a large height-to-width ratio. Displacement reduction was 73% compared to an unreinforced nacelle, and is attributed to the geometric features and layout of the stiffeners, while mass is maintained within the constraint.

  7. Fuel consumption optimization for smart hybrid electric vehicle during a car-following process

    NASA Astrophysics Data System (ADS)

    Li, Liang; Wang, Xiangyu; Song, Jian

    2017-03-01

    Hybrid electric vehicles (HEVs) offer great potential to save energy and reduce emissions, and smart vehicles bring great convenience and safety to drivers. By combining these two technologies, vehicles may achieve excellent performance in terms of dynamics, economy, environmental friendliness, safety, and comfort. Hence, a smart hybrid electric vehicle (s-HEV) is selected as the platform in this paper to study a car-following process with optimized fuel consumption. The whole process is a multi-objective optimization problem, whose optimal solution is not just an energy management strategy (EMS) added to an adaptive cruise control (ACC), but a deep fusion of these two methods. The problem has more constraints, optimization objectives, and system states, which may result in a larger computing burden. Therefore, a novel fuel consumption optimization algorithm based on model predictive control (MPC) is proposed, and search heuristics are adopted in the receding-horizon optimization to reduce the computing burden. Simulations are carried out, and the results indicate that the fuel consumption of the proposed method is lower than that of the ACC+EMS method while car-following performance is maintained.
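    The receding-horizon idea can be sketched as follows: at each step, enumerate short acceleration sequences, apply the first action of the cheapest sequence, and re-plan. The cost weights, discrete action set and point-mass kinematics are illustrative stand-ins for the paper's vehicle and fuel models:

```python
import itertools

DT = 0.5                      # time step [s]
ACTIONS = (-1.0, 0.0, 1.0)    # candidate accelerations [m/s^2] (illustrative)
H = 5                         # prediction horizon (steps)
GAP_REF = 20.0                # desired following gap [m] (illustrative)

def stage_cost(gap, accel):
    # Gap-tracking error plus a control-effort term standing in for fuel.
    return (gap - GAP_REF) ** 2 + 0.1 * accel ** 2

def mpc_step(gap, v_follow, v_lead):
    """Return the first acceleration of the cheapest horizon-H sequence."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(ACTIONS, repeat=H):
        g, v, cost = gap, v_follow, 0.0
        for a in seq:
            g += (v_lead - v) * DT    # lead vehicle assumed at constant speed
            v += a * DT
            cost += stage_cost(g, a)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]

def simulate(steps=60, gap=30.0, v_follow=15.0, v_lead=15.0):
    """Closed loop: apply the first planned action, then re-plan."""
    for _ in range(steps):
        a = mpc_step(gap, v_follow, v_lead)
        gap += (v_lead - v_follow) * DT
        v_follow += a * DT
    return gap, v_follow

final_gap, final_speed = simulate()
```

The paper's contribution lies in what this sketch abstracts away: a fuel-consumption model in the cost, richer constraints, and search skills that prune the exponential enumeration over the horizon.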

  8. Conformational Space Annealing explained: A general optimization algorithm, with diverse applications

    NASA Astrophysics Data System (ADS)

    Joung, InSuk; Kim, Jong Yun; Gross, Steven P.; Joo, Keehyoung; Lee, Jooyoung

    2018-02-01

    Many problems in science and engineering can be formulated as optimization problems. One way to solve them is to develop tailored problem-specific approaches. As such development is challenging, an alternative is to develop good generally applicable algorithms, which are easy to apply, typically function robustly, and reduce development time. Here we provide a description of one such algorithm, called Conformational Space Annealing (CSA), along with its Python version, PyCSA. We previously applied it to many optimization problems, including protein structure prediction and graph community detection. To demonstrate its utility, we have applied PyCSA to two continuous test functions, namely the Ackley and Eggholder functions. In addition, to demonstrate the full generality of PyCSA with respect to the type of objective function, we show how PyCSA can be applied to a discrete objective function, namely a parameter optimization problem. Based on the benchmarking results for the three problems, the performance of CSA is shown to be better than or similar to that of the most popular optimization method, simulated annealing. For the continuous objective functions, L-BFGS-B was the best-performing local optimization method, while for the discrete objective function Nelder-Mead was best. The current version of PyCSA can be run in parallel at a coarse-grained level by performing multiple independent local optimizations separately. The source code of PyCSA is available from http://lee.kias.re.kr.
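    For reference, the simulated-annealing baseline mentioned above can be sketched on the 2-D Ackley test function; the starting point, cooling schedule and step size below are illustrative choices, not the article's settings:

```python
import math
import random

def ackley(x, y):
    """2-D Ackley function: rugged landscape, global minimum 0 at the origin."""
    r = math.sqrt(0.5 * (x * x + y * y))
    s = 0.5 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y))
    return -20.0 * math.exp(-0.2 * r) - math.exp(s) + 20.0 + math.e

def simulated_annealing(f, start=(3.0, 4.0), steps=5000, t0=2.0,
                        cooling=0.999, sigma=0.5, seed=0):
    rng = random.Random(seed)
    x, y = start
    fx = f(x, y)
    best = (x, y, fx)
    t = t0
    for _ in range(steps):
        nx, ny = x + rng.gauss(0, sigma), y + rng.gauss(0, sigma)
        fn = f(nx, ny)
        # Metropolis rule: always accept downhill, sometimes accept uphill.
        if fn <= fx or rng.random() < math.exp((fx - fn) / t):
            x, y, fx = nx, ny, fn
            if fx < best[2]:
                best = (x, y, fx)
        t *= cooling               # geometric cooling
    return best
```

CSA differs from this single-walker scheme by annealing a distance cutoff over a bank of conformations rather than a temperature, which is what gives it its global-search character.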

  9. Optimized UDP-glucuronosyltransferase (UGT) activity assay for trout liver S9 fractions

    EPA Pesticide Factsheets

    This publication provides an optimized UGT assay for trout liver S9 fractions, which can be used to perform in vitro-in vivo extrapolations of measured UGT activity. This dataset is associated with the following publication: Ladd, M., P. Fitzsimmons, and J. Nichols. Optimization of a UDP-glucuronosyltransferase assay for trout liver S9 fractions: Activity enhancement by alamethicin, a pore-forming peptide. XENOBIOTICA. Taylor & Francis, Inc., Philadelphia, PA, USA, 46(12): 1066-1075, (2016).

  10. HERCULES: A Pattern Driven Code Transformation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist separate the two concerns, which improves code maintenance and facilitates performance optimization. The system combines three technologies, namely code patterns, transformation scripts, and compiler plugins, to provide the scientist with an environment for quickly implementing code transformations that suit his or her needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation, and an initial evaluation of HERCULES.

  11. A Robust Method to Integrate End-to-End Mission Architecture Optimization Tools

    NASA Technical Reports Server (NTRS)

    Lugo, Rafael; Litton, Daniel; Qu, Min; Shidner, Jeremy; Powell, Richard

    2016-01-01

    End-to-end mission simulations include multiple phases of flight. For example, an end-to-end Mars mission simulation may include launch from Earth, interplanetary transit to Mars, and entry, descent and landing. Each phase of flight is optimized to meet specified constraints and often depends on and impacts subsequent phases. The design and optimization tools and methodologies used to combine different aspects of an end-to-end framework, and their impact on mission planning, are presented. This work focuses on a robust implementation of a Multidisciplinary Design Analysis and Optimization (MDAO) method that offers the flexibility to quickly adapt to changing mission design requirements. Different simulations tailored to the liftoff, ascent, and atmospheric entry phases of a trajectory are integrated and optimized in the MDAO program Isight, which provides the user a graphical interface to link simulation inputs and outputs. This approach provides many advantages to mission planners, as it is easily adapted to different mission scenarios and can improve the understanding of the integrated system performance within a particular mission configuration. A Mars direct entry mission using the Space Launch System (SLS) is presented as a generic end-to-end case study. For the given launch period, the SLS launch performance is traded for improved orbit geometry alignment, resulting in an optimized net payload that is comparable to that in the SLS Mission Planner's Guide.

  12. Model-as-a-service (MaaS) using the cloud service innovation platform (CSIP)

    USDA-ARS?s Scientific Manuscript database

    Cloud infrastructures for modelling activities such as data processing, performing environmental simulations, or conducting model calibrations/optimizations provide a cost effective alternative to traditional high performance computing approaches. Cloud-based modelling examples emerged into the more...

  13. CPAC: Energy-Efficient Data Collection through Adaptive Selection of Compression Algorithms for Sensor Networks

    PubMed Central

    Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon

    2014-01-01

    We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem into binary integer programs, which provide an energy-optimal solution under the given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment where a stationary sink collects data from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art collection protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
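    The two decisions above (which nodes compress, given an energy/latency trade-off) can be illustrated with a toy version of the binary program. All node names, energy and latency numbers below are hypothetical, and the exhaustive search over binary choices stands in for a real integer-program solver:

    ```python
    from itertools import product

    # Hypothetical per-node numbers (illustrative only, not from the paper):
    nodes = ["n1", "n2", "n3"]
    tx_energy = {"n1": 10.0, "n2": 8.0, "n3": 12.0}   # radio energy, uncompressed
    comp_energy = {"n1": 2.0, "n2": 5.0, "n3": 3.0}   # CPU energy to compress
    comp_delay = {"n1": 1.0, "n2": 2.0, "n3": 1.5}    # added latency if compressing
    comp_ratio = 0.5                                   # compressed payload fraction
    latency_budget = 3.0                               # total allowed added delay

    best = None  # (total energy, binary compression choice per node)
    for choice in product([0, 1], repeat=len(nodes)):
        energy = sum(tx_energy[n] * (comp_ratio if c else 1.0) + comp_energy[n] * c
                     for n, c in zip(nodes, choice))
        delay = sum(comp_delay[n] * c for n, c in zip(nodes, choice))
        if delay <= latency_budget and (best is None or energy < best[0]):
            best = (energy, choice)
    ```

    In this toy instance the optimum compresses only n1 and n3: compressing n2 costs more CPU energy than it saves in transmission, and compressing all three violates the latency budget.
    
    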

  14. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.

  15. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms the other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
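    The basic least-squares PCE setup being reviewed can be sketched in one dimension with a Legendre basis and plain Monte Carlo sampling (the simplest of the strategies above). The target function and sample counts are chosen for illustration only:

    ```python
    import numpy as np

    def pce_ls(f, order=4, n_samples=40, seed=0):
        """Least-squares PCE fit in 1D: Legendre basis, Monte Carlo samples of a
        uniform input on [-1, 1]."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, n_samples)            # MC design points
        A = np.polynomial.legendre.legvander(x, order)   # measurement matrix
        coeffs, *_ = np.linalg.lstsq(A, f(x), rcond=None)
        return coeffs

    # Exact low-order Legendre series: 1*P0 + 2*P1 + 0.5*P2, with P2 = (3x^2 - 1)/2.
    f = lambda x: 1.0 + 2.0 * x + 0.5 * (3.0 * x ** 2 - 1.0) / 2.0
    c = pce_ls(f, order=4)
    ```

    Because the target lies exactly in the basis, the oversampled least-squares fit recovers the coefficients to machine precision; the sampling strategies surveyed in the paper differ in how the design points `x` are chosen.
    
    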

  16. Automatic threshold optimization in nonlinear energy operator based spike detection.

    PubMed

    Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M

    2016-08-01

    In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage, whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy in both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
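    The NEO detector with its conventional mean-scaled threshold can be sketched as follows. The `c * mean(psi)` threshold form is the common convention the paper calls "traditional"; its gradient-based calibration of the constant is not reproduced here:

    ```python
    import numpy as np

    def neo(x):
        """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
        x = np.asarray(x, dtype=float)
        psi = np.zeros_like(x)
        psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
        return psi

    def detect_spikes(x, c=8.0):
        """Flag samples whose NEO output exceeds c * mean(psi); c is a tuning constant."""
        psi = neo(x)
        threshold = c * psi.mean()
        return np.flatnonzero(psi > threshold), threshold

    # Toy trace: flat baseline with a single large deflection at sample 10.
    trace = np.zeros(20)
    trace[10] = 5.0
    spikes, thr = detect_spikes(trace)
    ```

    On real recordings the mean of psi is dominated by noise energy, so the choice of the constant `c` directly trades missed spikes against false alarms, which is what motivates automatic threshold optimization.
    
    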

  17. A new logistic dynamic particle swarm optimization algorithm based on random topology.

    PubMed

    Ni, Qingjian; Deng, Jianming

    2013-01-01

    Population topology of particle swarm optimization (PSO) will directly affect the dissemination of optimal information during the evolutionary process and will have a significant impact on the performance of PSO. Classic static population topologies are usually used in PSO, such as fully connected topology, ring topology, star topology, and square topology. In this paper, the performance of PSO with the proposed random topologies is analyzed, and the relationship between population topology and the performance of PSO is also explored from the perspective of graph theory characteristics in population topologies. Further, for a relatively new PSO variant named logistic dynamic particle swarm optimization, an extensive simulation study is presented to discuss the effectiveness of the random topology and the design strategies of population topology. Finally, the experimental data are analyzed and discussed, and some useful conclusions about the design and use of population topology in PSO are proposed, which can provide a basis for further discussion and research.
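    A minimal PSO with a random informant topology, where each particle takes its social attractor from k randomly chosen peers plus itself, might look like the sketch below. The inertia and acceleration values are standard defaults, not the paper's settings, and the sphere function stands in for a real benchmark:

    ```python
    import numpy as np

    def pso_sphere(n_particles=20, dim=2, iters=200, k=3, seed=0):
        """Minimal PSO on the sphere function with a random informant topology."""
        rng = np.random.default_rng(seed)
        f = lambda x: np.sum(x ** 2, axis=-1)
        pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
        vel = np.zeros((n_particles, dim))
        pbest = pos.copy()
        pbest_val = f(pos)
        # Random topology: each particle is informed by k random peers (plus itself).
        informants = [rng.choice(n_particles, size=k, replace=False)
                      for _ in range(n_particles)]
        w, c1, c2 = 0.72, 1.49, 1.49  # common inertia/acceleration defaults
        for _ in range(iters):
            for i in range(n_particles):
                nb = np.append(informants[i], i)
                lbest = pbest[nb[np.argmin(pbest_val[nb])]]
                r1, r2 = rng.random(dim), rng.random(dim)
                vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                          + c2 * r2 * (lbest - pos[i]))
                pos[i] = pos[i] + vel[i]
            val = f(pos)
            improved = val < pbest_val
            pbest[improved] = pos[improved]
            pbest_val[improved] = val[improved]
        return pbest_val.min()
    ```

    Swapping the `informants` construction for a ring or fully connected neighborhood reproduces the classic static topologies the paper compares against.
    
    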

  18. Rethinking key–value store for parallel I/O optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kougkas, Anthony; Eslami, Hassan; Sun, Xian-He

    2015-01-26

    Key-value stores are being widely used as the storage system for large-scale internet services and cloud storage systems. However, they are rarely used in HPC systems, where parallel file systems are the dominant storage solution. In this study, we examine the architecture differences and performance characteristics of parallel file systems and key-value stores. We propose using key-value stores to optimize overall Input/Output (I/O) performance, especially for workloads that parallel file systems cannot handle well, such as the cases with intense data synchronization or heavy metadata operations. We conducted experiments with several synthetic benchmarks, an I/O benchmark, and a real application. We modeled the performance of these two systems using collected data from our experiments, and we provide a predictive method to identify which system offers better I/O performance given a specific workload. The results show that we can optimize the I/O performance in HPC systems by utilizing key-value stores.

  19. Optimization study on inductive-resistive circuit for broadband piezoelectric energy harvesters

    NASA Astrophysics Data System (ADS)

    Tan, Ting; Yan, Zhimiao

    2017-03-01

    The performance of cantilever-beam piezoelectric energy harvester is usually analyzed with pure resistive circuit. The optimal performance of such a vibration-based energy harvesting system is limited by narrow bandwidth around its modified natural frequency. For broadband piezoelectric energy harvesting, series and parallel inductive-resistive circuits are introduced. The electromechanical coupled distributed parameter models for such systems under harmonic base excitations are decoupled with modified natural frequency and electrical damping to consider the coupling effect. Analytical solutions of the harvested power and tip displacement for the electromechanical decoupled model are confirmed with numerical solutions for the coupled model. The optimal performance of piezoelectric energy harvesting with inductive-resistive circuits is revealed theoretically as constant maximal power at any excitation frequency. This is achieved by the scenarios of matching the modified natural frequency with the excitation frequency and equating the electrical damping to the mechanical damping. The inductance and load resistance should be simultaneously tuned to their optimal values, which may not be applicable for very high electromechanical coupling systems when the excitation frequency is higher than their natural frequencies. With identical optimal performance, the series inductive-resistive circuit is recommended for relatively small load resistance, while the parallel inductive-resistive circuit is suggested for relatively large load resistance. This study provides a simplified optimization method for broadband piezoelectric energy harvesters with inductive-resistive circuits.

  20. Active Correction of Aperture Discontinuities-Optimized Stroke Minimization. II. Optimization for Future Missions

    NASA Astrophysics Data System (ADS)

    Mazoyer, J.; Pueyo, L.; N'Diaye, M.; Fogarty, K.; Zimmerman, N.; Soummer, R.; Shaklan, S.; Norman, C.

    2018-01-01

    High-contrast imaging and spectroscopy provide unique constraints for exoplanet formation models as well as for planetary atmosphere models. Instrumentation techniques in this field have greatly improved over the last two decades, with the development of stellar coronagraphy, in parallel with specific methods of wavefront sensing and control. Next-generation space- and ground-based telescopes will enable the characterization of cold solar-system-like planets for the first time and maybe even the in situ detection of biomarkers. However, the growth of primary mirror diameters, necessary for these detections, comes with an increase in their complexity (segmentation, secondary mirror features). These discontinuities in the aperture can greatly limit the performance of coronagraphic instruments. In this context, we introduced a new technique, Active Correction of Aperture Discontinuities-Optimized Stroke Minimization (ACAD-OSM), to correct for the diffractive effects of aperture discontinuities in the final image plane of a coronagraph, using deformable mirrors. In this paper, we present several tools that can be used to optimize the performance of this technique for its application to future large missions. In particular, we analyzed the influence of the deformable mirror setup (size and separating distance) and found that there is an optimal point for this setup, optimizing the performance of the instrument in contrast and throughput while minimizing the strokes applied to the deformable mirrors. These results will help us design future coronagraphic instruments to obtain the best performance.

  1. Optimum Edging and Trimming of Hardwood Lumber

    Treesearch

    Carmen Regalado; D. Earl Kline; Philip A. Araman

    1992-01-01

    Before the adoption of an automated system for optimizing edging and trimming in hardwood mills, the performance of present manual systems must be evaluated to provide a basis for comparison. A study was made in which lumber values recovered in actual hardwood operations were compared to the output of a computer-based procedure for edging and trimming optimization. The...

  2. MADS Users' Guide

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.

    2014-01-01

    MADS (Minimization Assistant for Dynamical Systems) is a trajectory optimization code in which a user-specified performance measure is directly minimized, subject to constraints placed on a low-order discretization of user-supplied plant ordinary differential equations. This document describes the mathematical formulation of the set of trajectory optimization problems for which MADS is suitable, and describes the user interface. Usage examples are provided.

  3. Integrated Medical Model (IMM) Optimization Version 4.0 Functional Improvements

    NASA Technical Reports Server (NTRS)

    Arellano, John; Young, M.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Goodenow, D. A.; Myers, J. G.

    2016-01-01

    The IMM's ability to assess mission outcome risk levels relative to available resources offers a unique capability to provide guidance on optimal operational medical kit and vehicle resources. Post-processing optimization allows IMM to optimize essential resources to improve a specific model outcome, such as maximization of the Crew Health Index (CHI) or minimization of the probability of evacuation (EVAC) or of loss of crew life (LOCL). Mass and/or volume constrain the optimized resource set. The IMM's probabilistic simulation uses input data on one hundred medical conditions to simulate medical events that may occur in spaceflight, the resources required to treat those events, and the resulting impact to the mission based on specific crew and mission characteristics. Because IMM version 4.0 provides for partial treatment of medical events, IMM Optimization 4.0 scores resources at the individual resource unit increment level, as opposed to the full condition-specific treatment set level, as done in version 3.0. This allows the inclusion of as many resources as possible in the event that an entire set of resources called out for treatment cannot satisfy the constraints. IMM Optimization version 4.0 adds capabilities that increase efficiency by creating multiple resource sets based on differing constraints and priorities: CHI, EVAC, or LOCL. It also provides sets of resources that improve mission-related IMM v4.0 outputs with improved performance compared to the prior optimization. The new optimization represents much improved fidelity that will improve the utility of the IMM 4.0 for decision support.

  4. Optimization and performance comparison for galloping-based piezoelectric energy harvesters with alternating-current and direct-current interface circuits

    NASA Astrophysics Data System (ADS)

    Tan, Ting; Yan, Zhimiao; Lei, Hong

    2017-07-01

    Galloping-based piezoelectric energy harvesters scavenge small-scale wind energy and convert it into electrical energy. For piezoelectric energy harvesting with the same vibrational source (galloping) but different (alternating-current (AC) and direct-current (DC)) interfaces, general analytical solutions of the electromechanical coupled distributed parameter model are proposed. Galloping is theoretically proven to appear when the linear aerodynamic negative damping overcomes the electrical damping and mechanical damping. The harvested power is shown to be the work done by the electrical damping force. By tuning the load resistance to its optimal value for optimal or maximal electrical damping, the harvested power of the given structure with the AC/DC interface is maximized. The optimal load resistances and the corresponding performances of the two systems are compared. The optimal electrical damping is the same, but with different optimal load resistances, for the systems with the AC and DC interfaces. At small wind speeds, where the optimal electrical damping can be realized by tuning the load resistance alone, the performances of the two energy harvesting systems, including the minimal onset speeds of galloping, maximal harvested powers and corresponding tip displacements, are almost the same. Smaller maximal electrical damping with larger optimal load resistance is found for the harvester with the DC interface when compared to the harvester with the AC interface. At large wind speeds, when the maximal electrical damping rather than the optimal electrical damping can be reached by tuning the load resistance alone, the harvester with the AC interface circuit is recommended for a higher maximal harvested power with a smaller tip displacement. This study provides a method using the general electrical damping to connect and compare the performances of piezoelectric energy harvesters with the same excitation source but different interfaces.

  5. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  6. Extended Analytic Device Optimization Employing Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred

    2013-01-01

    Analytic optimization of a thermoelectric junction often introduces several simplifying assumptions, including constant material properties, fixed known hot and cold shoe temperatures, and thermally insulated leg sides. In fact all of these simplifications will have an effect on device performance, ranging from negligible to significant depending on conditions. Numerical methods, such as Finite Element Analysis or iterative techniques, are often used to perform more detailed analysis and account for these simplifications. While numerical methods may stand as a suitable solution scheme, they are weak in gaining physical understanding and only serve to optimize through iterative searching techniques. Analytic and asymptotic expansion techniques can be used to solve the governing system of thermoelectric differential equations with fewer or less severe assumptions than the classic case. Analytic methods can provide meaningful closed form solutions and generate better physical understanding of the conditions for when simplifying assumptions may be valid. In obtaining the analytic solutions a set of dimensionless parameters, which characterize all thermoelectric couples, is formulated and provides the limiting cases for validating assumptions. Presentation includes optimization of both classic rectangular couples as well as practically and theoretically interesting cylindrical couples using optimization parameters physically meaningful to a cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot and cold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through leg sides.
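    For context, the classic closed-form result that such analytic treatments extend is the maximum conversion efficiency of a couple expressed through its dimensionless figure of merit ZT. This is the standard constant-property textbook formula, not the paper's extended solution:

    ```python
    import math

    def te_max_efficiency(t_hot, t_cold, zt):
        """Classic maximum thermoelectric conversion efficiency:
        eta = (Carnot factor) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th),
        with ZT evaluated at the mean temperature (constant-property assumption)."""
        carnot = (t_hot - t_cold) / t_hot
        m = math.sqrt(1.0 + zt)
        return carnot * (m - 1.0) / (m + t_cold / t_hot)

    # Example: 500 K hot shoe, 300 K cold shoe, ZT = 1.
    eff = te_max_efficiency(500.0, 300.0, 1.0)
    ```

    The formula makes the limiting cases explicit: efficiency vanishes as ZT goes to zero and approaches the Carnot factor only as ZT grows without bound, which is exactly the kind of asymptotic behavior the dimensionless parameters in the paper are built to capture.
    
    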

  7. Improving the mixing performance of side channel type micromixers using an optimal voltage control model.

    PubMed

    Wu, Chien-Hsien; Yang, Ruey-Jen

    2006-06-01

    Electroosmotic flow in microchannels is restricted to low Reynolds number regimes. Since the inertia forces are extremely weak in such regimes, turbulent conditions do not readily develop, and hence species mixing occurs primarily as a result of diffusion. Consequently, achieving a thorough species mixing generally relies upon the use of extended mixing channels. This paper aims to improve the mixing performance of conventional side channel type micromixers by specifying the optimal driving voltages to be applied to each channel. In the proposed approach, the driving voltages are identified by constructing a simple theoretical scheme based on a 'flow-rate-ratio' model and Kirchhoff's law. The numerical and experimental results confirm that the optimal voltage control approach provides a better mixing performance than the use of a single driving voltage gradient.
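    The 'flow-rate-ratio' idea can be sketched with a resistor-network analogy, since electroosmotic flow rate tracks electric current under Kirchhoff's laws. The two-inlet geometry, resistances, and target flow rates below are hypothetical placeholders; a real device requires the full model in the paper:

    ```python
    def side_channel_voltages(r1, r2, r_out, q1, q2):
        """Kirchhoff-style solve for a two-inlet electroosmotic junction:
        flow rate plays the role of current and channel resistance that of
        electrical resistance, with the outlet held at ground."""
        v_junction = (q1 + q2) * r_out  # all flow leaves through the outlet channel
        v1 = v_junction + q1 * r1       # driving voltage at side-channel inlet 1
        v2 = v_junction + q2 * r2       # driving voltage at side-channel inlet 2
        return v1, v2

    # Target a 2:1 flow-rate ratio through two identical side channels.
    v1, v2 = side_channel_voltages(r1=1.0, r2=1.0, r_out=1.0, q1=2.0, q2=1.0)
    ```

    The point of the analogy is that the desired flow-rate ratio fixes the ratio of voltage drops across the side channels, so the optimal driving voltages follow from simple network algebra rather than trial and error.
    
    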

  8. Optimization of PET instrumentation for brain activation studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlbom, M.; Cherry, S.R.; Hoffman, E.J.

    By performing cerebral blood flow studies with positron emission tomography (PET), and comparing blood flow images of different states of activation, functional mapping of the brain is possible. The ability of current commercial instruments to perform such studies is investigated in this work, based on a comparison of noise equivalent count (NEC) rates. Differences in the NEC performance of the different scanners, in conjunction with scanner design parameters, provide insights into the importance of block design (size, dead time, crystal thickness) and overall scanner design (sensitivity and scatter fraction) for optimizing data from activation studies. The newer scanners with removable septa, operating with 3-D acquisition, have much higher sensitivity, but require new methodology for optimized operation. Only by administering multiple low doses (fractionation) of the flow tracer can the high sensitivity be utilized.
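    The NEC figure of merit used for this comparison has a standard closed form; the count rates below are illustrative numbers, not measurements from the paper:

    ```python
    def nec_rate(trues, scatter, randoms, k=2.0):
        """Noise-equivalent count rate: NEC = T^2 / (T + S + k * R).
        k = 2 corresponds to delayed-window randoms subtraction (the usual
        convention); k = 1 applies to a noiseless randoms estimate."""
        return trues ** 2 / (trues + scatter + k * randoms)

    # Illustrative: a higher-sensitivity 3-D acquisition can win on NEC even
    # with a worse scatter fraction, because trues enter quadratically.
    nec_2d = nec_rate(trues=100.0, scatter=20.0, randoms=10.0)
    nec_3d = nec_rate(trues=500.0, scatter=250.0, randoms=150.0)
    ```

    Because trues appear squared in the numerator, the sensitivity gain from removing the septa can outweigh the added scatter and randoms, which is the trade-off the NEC comparison quantifies.
    
    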

  9. A novel model of motor learning capable of developing an optimal movement control law online from scratch.

    PubMed

    Shimansky, Yury P; Kang, Tao; He, Jiping

    2004-02-01

    A computational model of a learning system (LS) is described that acquires the knowledge and skill necessary for optimal control of the dynamics of a multisegmental limb (the controlled object, or CO), starting from "knowing" only the dimensionality of the object's state space. It is based on an optimal control problem setup different from that of reinforcement learning. The LS solves the optimal control problem online while practicing the manipulation of the CO. The system's functional architecture comprises several adaptive components, each of which incorporates a number of mapping functions approximated based on artificial neural nets. Besides the internal model of the CO's dynamics and the adaptive controller that computes the control law, the LS includes a new type of internal model, the minimal cost (IM(mc)) of moving the controlled object between a pair of states. That internal model appears critical for the LS's capacity to develop an optimal movement trajectory. The IM(mc) interacts with the adaptive controller in a cooperative manner. The controller provides an initial approximation of an optimal control action, which is further optimized in real time based on the IM(mc). The IM(mc) in turn provides information for updating the controller. The LS's performance was tested on the task of center-out reaching to eight randomly selected targets with a 2-DOF limb model. The LS reached an optimal level of performance in a few tens of trials. It also quickly adapted to movement perturbations produced by two different types of external force field. The results suggest that the proposed design of a self-optimized control system can serve as a basis for the modeling of motor learning that includes the formation and adaptive modification of the plan of a goal-directed movement.

  10. Wind data for wind driven plant. [site selection for optimal performance

    NASA Technical Reports Server (NTRS)

    Stodhart, A. H.

    1973-01-01

    Simple, averaged wind velocity data provide information on energy availability, facilitate generator site selection and enable appropriate operating ranges to be established for windpowered plants. They also provide a basis for the prediction of extreme wind speeds.

  11. Detailed design of a lattice composite fuselage structure by a mixed optimization method

    NASA Astrophysics Data System (ADS)

    Liu, D.; Lohse-Busch, H.; Toropov, V.; Hühne, C.; Armani, U.

    2016-10-01

    In this article, a procedure for designing a lattice fuselage barrel is developed. It comprises three stages: first, topology optimization of an aircraft fuselage barrel is performed with respect to weight and structural performance to obtain the conceptual design. The interpretation of the optimal result is given to demonstrate the development of this new lattice airframe concept for the fuselage barrel. Subsequently, parametric optimization of the lattice aircraft fuselage barrel is carried out using genetic algorithms on metamodels generated with genetic programming from a 101-point optimal Latin hypercube design of experiments. The optimal design is achieved in terms of weight savings subject to stability, global stiffness and strain requirements, and then verified by the fine mesh finite element simulation of the lattice fuselage barrel. Finally, a practical design of the composite skin complying with the aircraft industry lay-up rules is presented. It is concluded that the mixed optimization method, combining topology optimization with the global metamodel-based approach, allows the problem to be solved with sufficient accuracy and provides the designers with a wealth of information on the structural behaviour of the novel anisogrid composite fuselage design.

  12. Co-Optimization of Fuels & Engines (Co-Optima) Initiative: Recent Progress on Light-Duty Boosted Spark-Ignition Fuels/Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, John

    This presentation reports recent progress on light-duty boosted spark-ignition fuels/engines being developed under the Co-Optimization of Fuels and Engines initiative (Co-Optima). Co-Optima is focused on identifying fuel properties that optimize engine performance, independent of composition, allowing the market to define the best means to blend and provide these fuels. However, in support of this, we are pursuing a systematic study of blendstocks to identify a broad range of feasible options, with the objective of identifying blendstocks that can provide target ranges of key fuel properties, identifying trade-offs on consistent and comprehensive basis, and sharing information with stakeholders.

  13. Doubling Pharmacist Coverage in the Intensive Care Unit: Impact on the Pharmacists' Clinical Activities and Team Members' Satisfaction.

    PubMed

    McDaniel, Joshua; Bass, Lynn; Pate, Toni; DeValve, Michael; Miller, Susan

    2017-09-01

    Background: National professional organizations have recognized pharmacists as essential members of the intensive care unit (ICU) team. Critical care pharmacists' clinical activities have been categorized as fundamental, desirable, and optimal, providing a structure for gauging ICU pharmacy services being provided. Objective: To determine the impact the addition of a second ICU pharmacist covering 30 adult ICU beds at a large regional medical center has on the complexity of pharmacists' interventions, the types of clinical activities performed by the pharmacists, and the ICU team members' satisfaction. Methods: A prospective mixed-method descriptive study was conducted. Pharmacists recorded their interventions and clinical activities performed. A focus group composed of randomly selected ICU team members was held to qualitatively describe the impact of the additional pharmacist coverage on patient care, team dynamics, and pharmacy services provided. Results: The baseline period consisted of 33 days, and the intervention period consisted of 20 days. The average complexity of interventions was 1.72 during the baseline period (mode = 2) versus 1.69 (mode = 2) during the intervention period. The number of desirable and optimal clinical activities performed daily increased during the intervention from 8.4 (n = 279) to 16.4 (n = 328) and 2.3 (n = 75) to 8.6 (n = 171) compared with the baseline, respectively. Focus group members qualitatively described additional pharmacist coverage as beneficial. Conclusion: The additional critical care pharmacist did not increase pharmacy intervention complexity; however, more interventions were performed per day. Additional pharmacist coverage increased the daily number of desirable and optimal clinical activities performed and positively impacted ICU team members' satisfaction.

  14. Optimal Design of Cable-Driven Manipulators Using Particle Swarm Optimization.

    PubMed

    Bryson, Joshua T; Jin, Xin; Agrawal, Sunil K

    2016-08-01

    The design of cable-driven manipulators is complicated by the unidirectional nature of the cables, which results in extra actuators and limited workspaces. Furthermore, the particular arrangement of the cables and the geometry of the robot pose have a significant effect on the cable tension required to effect a desired joint torque. For a sufficiently complex robot, the identification of a satisfactory cable architecture can be difficult and can result in multiply redundant actuators and performance limitations based on workspace size and cable tensions. This work leverages previous research into the workspace analysis of cable systems combined with stochastic optimization to develop a generalized methodology for designing optimized cable routings for a given robot and desired task. A cable-driven robot leg performing a walking-gait motion is used as a motivating example to illustrate the methodology application. The components of the methodology are described, and the process is applied to the example problem. An optimal cable routing is identified, which provides the necessary controllable workspace to perform the desired task and enables the robot to perform that task with minimal cable tensions. A robot leg is constructed according to this routing and used to validate the theoretical model and to demonstrate the effectiveness of the resulting cable architecture.
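    The cable-tension coupling described in the abstract can be illustrated with a minimal one-joint reduction (an illustrative sketch, not the paper's methodology; the function name, moment arms, and torques below are all hypothetical): because cables can only pull, a cable can contribute a demanded joint torque only when the sign of its moment arm matches the sign of the torque.

```python
def min_peak_tension(arms, tau):
    """Smallest single-cable tension achieving joint torque tau.

    arms: signed moment arms r_i of each cable about the joint (m).
    Cables only pull, so each tension t_i >= 0 and cable i contributes
    r_i * t_i to the joint torque.  Returns (cable_index, tension), or
    None when no cable can supply torque of the required sign.
    """
    best = None
    for i, r in enumerate(arms):
        if r == 0 or (r > 0) != (tau > 0):
            continue  # this cable pulls the wrong way for tau
        t = tau / r   # tension needed if this cable acts alone
        if best is None or t < best[1]:
            best = (i, t)
    return best
```

    A routing with arms of both signs reaches torques in either direction, while a routing whose arms all share one sign leaves half the torque space uncontrollable, which is one way routing choice constrains the controllable workspace.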

  15. Numerical Modeling and Optimization of Warm-water Heat Sinks

    NASA Astrophysics Data System (ADS)

    Hadad, Yaser; Chiarot, Paul

    2015-11-01

    For cooling in large data-centers and supercomputers, water is increasingly replacing air as the working fluid in heat sinks. Utilizing water provides unique capabilities; for example: higher heat capacity, Prandtl number, and convection heat transfer coefficient. The use of warm, rather than chilled, water has the potential to provide increased energy efficiency. The geometric and operating parameters of the heat sink govern its performance. Numerical modeling is used to examine the influence of geometry and operating conditions on key metrics such as thermal and flow resistance. This model also facilitates studies on cooling of electronic chip hot spots and failure scenarios. We report on the optimal parameters for a warm-water heat sink to achieve maximum cooling performance.
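    The dependence of cooling performance on flow and geometry can be captured, to first order, by a textbook resistance model (a hedged simplification, not the authors' numerical model; the parameter values below are made up): the chip temperature rise equals the heat load times the sum of the convective film resistance and the caloric resistance of the warm water.

```python
def junction_temp(q, t_in, h, area, m_dot, cp=4180.0):
    """First-order warm-water heat sink estimate.

    q      heat load (W)
    t_in   coolant inlet temperature (deg C)
    h      convection coefficient (W/m^2-K)
    area   wetted area (m^2)
    m_dot  coolant mass flow (kg/s)
    cp     water specific heat (J/kg-K)
    """
    r_conv = 1.0 / (h * area)          # film resistance, K/W
    r_cal = 1.0 / (2.0 * m_dot * cp)   # mean coolant heat-up, K/W
    return t_in + q * (r_conv + r_cal)
```

    Even with warm 45 degree inlet water, a 200 W load stays only a few degrees above inlet at modest flow rates, which is the energy-efficiency argument for warm-water cooling.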

  16. Village power options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lilienthal, P.

    1997-12-01

    This paper describes three computer codes that have been written to model village power applications. The development of these codes has been driven by several factors: only limited field data exist; diverse applications can be modeled; models allow cost and performance comparisons; and simulations generate insights into cost structures. The models discussed are: Hybrid2, a public code which provides detailed engineering simulations to analyze the performance of a particular configuration; HOMER, the hybrid optimization model for electric renewables, which provides economic screening for sensitivity analyses; and VIPOR, the village power model, which is a network optimization model for comparing mini-grids to individual systems. Examples of the output of these codes are presented for specific applications.

  17. DYNAMIC NEUROMUSCULAR STABILIZATION & SPORTS REHABILITATION

    PubMed Central

    Kobesova, Alena; Kolar, Pavel

    2013-01-01

    Dynamic neuromuscular (core) stability is necessary for optimal athletic performance and is not achieved purely by adequate strength of abdominals, spinal extensors, gluteals or any other musculature; rather, core stabilization is accomplished through precise coordination of these muscles and intra‐abdominal pressure regulation by the central nervous system. Understanding developmental kinesiology provides a framework to appreciate the regional interdependence and the inter‐linking of the skeleton, joints, and musculature during movement and the importance of training both the dynamic and stabilizing function of muscles in the kinetic chain. The Dynamic Neuromuscular Stabilization (DNS) approach provides functional tools to assess and activate the intrinsic spinal stabilizers in order to optimize the movement system for both pre‐habilitation and rehabilitation of athletic injuries and performance. Level of Evidence: 5 PMID:23439921

  18. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations to within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
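    The analytical acceleration model can be sketched as a compute-plus-transfer time estimate (a simplified stand-in for the paper's actual model; the workload, rates, and sizes below are hypothetical): wall time is the compute divided across the GPUs plus the serialized transfer over the interconnect, so speedup approaches the GPU count only when compute dominates.

```python
def mgcs_time(work_flops, n_gpus, gpu_flops, data_bytes, net_bps):
    """Estimated wall time: parallel compute plus serialized transfer."""
    compute = work_flops / (n_gpus * gpu_flops)   # shared across GPUs
    transfer = data_bytes * 8.0 / net_bps         # serialized on the link
    return compute + transfer

def speedup(work_flops, n_gpus, gpu_flops, data_bytes, net_bps):
    """Acceleration relative to a single GPU on the same network."""
    return (mgcs_time(work_flops, 1, gpu_flops, data_bytes, net_bps)
            / mgcs_time(work_flops, n_gpus, gpu_flops, data_bytes, net_bps))
```

    With zero transfer the speedup equals the GPU count, matching the theoretical limit noted in the abstract; any fixed transfer cost pulls it below that limit.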

  19. Joint optimization: Merging a new culture with a new physical environment.

    PubMed

    Stichler, Jaynelle F; Ecoff, Laurie

    2009-04-01

    Nearly $200 billion of healthcare construction is expected by the year 2015, and nurse leaders must expand their knowledge and capabilities in healthcare design. This bimonthly department prepares nurse leaders to use the evidence-based design process to ensure that new, expanded, and renovated hospitals facilitate optimal patient outcomes, enhance the work environment for healthcare providers, and improve organizational performance. In this article, the authors discuss the concept of joint optimization of merging organizational culture with a new hospital facility.

  20. Numerical and experimental analysis of a ducted propeller designed by a fully automated optimization process under open water condition

    NASA Astrophysics Data System (ADS)

    Yu, Long; Druckenbrod, Markus; Greve, Martin; Wang, Ke-qi; Abdel-Maksoud, Moustafa

    2015-10-01

    A fully automated optimization process is provided for the design of ducted propellers under open water conditions, including 3D geometry modeling, meshing, optimization algorithm and CFD analysis techniques. The developed process allows the direct integration of a RANSE solver in the design stage. A practical ducted propeller design case study is carried out for validation. Numerical simulations and open water tests were carried out and confirmed that the optimum ducted propeller improves hydrodynamic performance as predicted.

  1. Flight Planning

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Seagull Technology, Inc., Sunnyvale, CA, produced a computer program under a Langley Research Center Small Business Innovation Research (SBIR) grant called STAFPLAN (Seagull Technology Advanced Flight Plan) that plans optimal trajectory routes for small to medium sized airlines to minimize direct operating costs while complying with various airline operating constraints. STAFPLAN incorporates four input databases (weather, route data, aircraft performance, and flight-specific information covering times, payload, crew, and fuel cost) to provide the correct amount of fuel, optimal cruise altitude, climb and descent points, optimal cruise speed, and flight path.

  2. Optimization Strategies for Sensor and Actuator Placement

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Kincaid, Rex K.

    1999-01-01

    This paper provides a survey of actuator and sensor placement problems from a wide range of engineering disciplines and a variety of applications. Combinatorial optimization methods are recommended as a means for identifying sets of actuators and sensors that maximize performance. Several sample applications from NASA Langley Research Center, such as active structural acoustic control, are covered in detail. Laboratory and flight tests of these applications indicate that actuator and sensor placement methods are effective and important. Lessons learned in solving these optimization problems can guide future research.
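    One simple member of the combinatorial family surveyed here is a greedy set-cover heuristic (a generic sketch, not necessarily one of the methods the paper recommends; the site names and mode sets are invented): repeatedly pick the candidate location that covers the most still-uncovered structural modes.

```python
def greedy_placement(candidates, k):
    """Pick k sensor sites maximizing the set of modes observed.

    candidates: dict mapping site name -> set of modes it senses.
    """
    chosen, covered = [], set()
    for _ in range(k):
        # site adding the most not-yet-covered modes
        site = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: len(candidates[c] - covered),
        )
        chosen.append(site)
        covered |= candidates[site]
    return chosen, covered
```

    Greedy selection is fast and often near-optimal for coverage-style objectives, which is why it is a common baseline against which richer combinatorial searches are judged.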

  3. Stochastic optimization algorithms for barrier dividend strategies

    NASA Astrophysics Data System (ADS)

    Yin, G.; Song, Q. S.; Yang, H.

    2009-01-01

    This work focuses on finding the optimal barrier policy for an insurance risk model when dividends are paid to the shareholders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with the existing results in the literature, more general surplus processes are considered. Precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms is developed. Convergence of the algorithm is analyzed; the rate of convergence is also provided. Numerical results are reported to demonstrate the performance of the algorithm.
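    The flavor of such noise-driven barrier optimization can be conveyed by a Kiefer-Wolfowitz stochastic approximation sketch (a generic illustration, not the authors' algorithm; the toy dividend function and its optimum are invented): the barrier level is nudged along a finite-difference gradient estimated purely from noisy payoff observations, with decaying step sizes.

```python
import random

def kiefer_wolfowitz(noisy_payoff, b0, steps=2000, seed=0):
    """Climb a payoff known only through noisy evaluations."""
    rng = random.Random(seed)
    b = b0
    for n in range(1, steps + 1):
        a = 1.0 / n            # decaying step size
        c = 1.0 / n ** 0.25    # decaying finite-difference width
        grad = (noisy_payoff(b + c, rng) - noisy_payoff(b - c, rng)) / (2 * c)
        b += a * grad          # follow the estimated dividend gradient
    return b

def payoff(b, rng):
    """Toy concave dividend value, maximized at barrier b = 3."""
    return -(b - 3.0) ** 2 + rng.gauss(0.0, 0.1)
```

    No model of the surplus process appears anywhere in the update, mirroring the abstract's point that only noise-corrupted dividend observations are needed.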

  4. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    NASA Astrophysics Data System (ADS)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  5. Performance Analysis of Fuzzy-PID Controller for Blood Glucose Regulation in Type-1 Diabetic Patients.

    PubMed

    Yadav, Jyoti; Rani, Asha; Singh, Vijander

    2016-12-01

    This paper presents Fuzzy-PID (FPID) control scheme for a blood glucose control of type 1 diabetic subjects. A new metaheuristic Cuckoo Search Algorithm (CSA) is utilized to optimize the gains of FPID controller. CSA provides fast convergence and is capable of handling global optimization of continuous nonlinear systems. The proposed controller is an amalgamation of fuzzy logic and optimization which may provide an efficient solution for complex problems like blood glucose control. The task is to maintain normal glucose levels in the shortest possible time with minimum insulin dose. The glucose control is achieved by tuning the PID (Proportional Integral Derivative) and FPID controller with the help of Genetic Algorithm and CSA for comparative analysis. The designed controllers are tested on Bergman minimal model to control the blood glucose level in the facets of parameter uncertainties, meal disturbances and sensor noise. The results reveal that the performance of CSA-FPID controller is superior as compared to other designed controllers.
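    A minimal cuckoo-search sketch conveys how CSA explores a gain space (heavily simplified: Gaussian steps stand in for the usual Levy flights, and the objective is a toy function rather than a glucose model; all parameter values are illustrative):

```python
import random

def cuckoo_search(cost, dim, bounds, n_nests=15, pa=0.25, iters=300, seed=1):
    """Minimal cuckoo-search sketch minimizing cost over a box."""
    rng = random.Random(seed)
    lo, hi = bounds

    def clip(x):
        return min(hi, max(lo, x))

    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [cost(n) for n in nests]
    for _ in range(iters):
        i = rng.randrange(n_nests)
        new = [clip(x + rng.gauss(0, 0.1 * (hi - lo))) for x in nests[i]]
        f_new = cost(new)
        j = rng.randrange(n_nests)
        if f_new < fit[j]:               # brood parasitism: replace a host nest
            nests[j], fit[j] = new, f_new
        for k in range(n_nests):         # abandon a fraction of nests (never the best)
            if rng.random() < pa and fit[k] > min(fit):
                nests[k] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[k] = cost(nests[k])
    best = min(range(n_nests), key=lambda k: fit[k])
    return nests[best], fit[best]
```

    In the paper's setting the cost function would instead simulate the Bergman minimal model under candidate FPID gains and score the resulting glucose response.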

  6. Multi-objective optimal design of sandwich panels using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Xiaomei; Jiang, Yiping; Pueh Lee, Heow

    2017-10-01

    In this study, an optimization problem concerning sandwich panels is investigated by simultaneously considering the two objectives of minimizing the panel mass and maximizing the sound insulation performance. First of all, the acoustic model of sandwich panels is discussed, which provides a foundation to model the acoustic objective function. Then the optimization problem is formulated as a bi-objective programming model, and a solution algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II) is provided to solve the proposed model. Finally, taking an example of a sandwich panel that is expected to be used as an automotive roof panel, numerical experiments are carried out to verify the effectiveness of the proposed model and solution algorithm. Numerical results demonstrate in detail how the core material, geometric constraints and mechanical constraints impact the optimal designs of sandwich panels.
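    The mass-versus-insulation trade-off NSGA-II explores rests on Pareto dominance, which can be sketched directly (a generic helper, not the authors' NSGA-II implementation; the sample points are invented): a design is kept only if no other design is at least as good in both objectives and strictly better in one.

```python
def pareto_front(points):
    """Indices of non-dominated points when minimizing every objective."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q != p and all(q[k] <= p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

    NSGA-II repeatedly applies this sorting, plus a crowding-distance tie-break, to rank a whole population; the sketch shows only the dominance test itself.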

  7. An Effective Mechanism for Virtual Machine Placement using ACO in IaaS Cloud

    NASA Astrophysics Data System (ADS)

    Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.

    2017-08-01

    Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, which can be solved using heuristic algorithms. In this paper, Ant Colony Optimization (ACO) based virtual machine placement is proposed. The proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple-cloud-provider environment; the response time of each cloud provider is monitored periodically so as to minimize delays in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes the cost, the response time, and the number of migrations.
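    A toy version of the ACO placement loop looks like the following (a generic sketch, not the paper's implementation; the cost matrix and parameters are invented): ants sample VM-to-provider assignments with probability proportional to pheromone divided by cost, and pheromone evaporates and is reinforced along the best plan found so far.

```python
import random

def aco_placement(costs, n_ants=20, iters=60, rho=0.5, seed=0):
    """Tiny ACO sketch assigning each VM to one provider.

    costs[v][p] = cost of hosting VM v on provider p.
    """
    rng = random.Random(seed)
    n_vms, n_prov = len(costs), len(costs[0])
    tau = [[1.0] * n_prov for _ in range(n_vms)]   # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            plan = []
            for v in range(n_vms):
                w = [tau[v][p] / costs[v][p] for p in range(n_prov)]
                plan.append(rng.choices(range(n_prov), weights=w)[0])
            c = sum(costs[v][plan[v]] for v in range(n_vms))
            if c < best_cost:
                best, best_cost = plan, c
        for v in range(n_vms):       # evaporate, then reinforce the best plan
            for p in range(n_prov):
                tau[v][p] *= 1 - rho
            tau[v][best[v]] += 1.0 / best_cost
    return best, best_cost
```

    A real placement mechanism would fold response-time penalties and migration counts into the cost; the sketch optimizes hosting cost alone.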

  8. Magic in the machine: a computational magician's assistant.

    PubMed

    Williams, Howard; McOwan, Peter W

    2014-01-01

    A human magician blends science, psychology, and performance to create a magical effect. In this paper we explore what can be achieved when that human intelligence is replaced or assisted by machine intelligence. Magical effects are all in some form based on hidden mathematical, scientific, or psychological principles; often the parameters controlling these underpinning techniques are hard for a magician to blend to maximize the magical effect required. The complexity is often caused by interacting and often conflicting physical and psychological constraints that need to be optimally balanced. Normally this tuning is done by trial and error, combined with human intuitions. Here we focus on applying Artificial Intelligence methods to the creation and optimization of magic tricks exploiting mathematical principles. We use experimentally derived data about particular perceptual and cognitive features, combined with a model of the underlying mathematical process to provide a psychologically valid metric to allow optimization of magical impact. In the paper we introduce our optimization methodology and describe how it can be flexibly applied to a range of different types of mathematics based tricks. We also provide two case studies as exemplars of the methodology at work: a magical jigsaw, and a mind reading card trick effect. We evaluate each trick created through testing in laboratory and public performances, and further demonstrate the real world efficacy of our approach for professional performers through sales of the tricks in a reputable magic shop in London.

  9. Data centers as dispatchable loads to harness stranded power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kibaek; Yang, Fan; Zavala, Victor M.

    Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.

  10. Data centers as dispatchable loads to harness stranded power

    DOE PAGES

    Kim, Kibaek; Yang, Fan; Zavala, Victor M.; ...

    2016-07-20

    Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.

  11. Aero-thermal optimization of film cooling flow parameters on the suction surface of a high pressure turbine blade

    NASA Astrophysics Data System (ADS)

    El Ayoubi, Carole; Hassan, Ibrahim; Ghaly, Wahid

    2012-11-01

    This paper aims to optimize film coolant flow parameters on the suction surface of a high-pressure gas turbine blade in order to obtain an optimum compromise between a superior cooling performance and a minimum aerodynamic penalty. An optimization algorithm coupled with three-dimensional Reynolds-averaged Navier-Stokes analysis is used to determine the optimum film cooling configuration. The VKI blade with two staggered rows of axially oriented, conically flared, film cooling holes on its suction surface is considered. Two design variables are selected: the coolant to mainstream temperature ratio and total pressure ratio. The optimization objective consists of maximizing the spatially averaged film cooling effectiveness and minimizing the aerodynamic penalty produced by film cooling. The effect of varying the coolant flow parameters on the film cooling effectiveness and the aerodynamic loss is analyzed using an optimization method and three-dimensional steady CFD simulations. The optimization process consists of a genetic algorithm and a response surface approximation of the artificial neural network type to provide low-fidelity predictions of the objective function. The CFD simulations are performed using the commercial software CFX. The numerical predictions of the aero-thermal performance are validated against a well-established experimental database.

  12. Optimal Control-Based Adaptive NN Design for a Class of Nonlinear Discrete-Time Block-Triangular Systems.

    PubMed

    Liu, Yan-Jun; Tong, Shaocheng

    2016-11-01

    In this paper, we propose an optimal control scheme-based adaptive neural network design for a class of unknown nonlinear discrete-time systems. The controlled systems are in a block-triangular multi-input-multi-output pure-feedback structure, i.e., both state and input couplings and nonaffine functions are included in every equation of each subsystem. The design objective is to provide a control scheme which not only guarantees the stability of the systems, but also achieves optimal control performance. The main contribution of this paper is that it achieves, for the first time, optimal performance for such a class of systems. Owing to the interactions among subsystems, constructing an optimal control signal is a difficult task. The design ideas are: 1) the systems are transformed into an output predictor form; 2) for the output predictor, the ideal control signal and the strategic utility function can be approximated by using an action network and a critic network, respectively; and 3) an optimal control signal is constructed with the weight update rules designed based on a gradient descent method. The stability of the systems can be proved based on the difference Lyapunov method. Finally, a numerical simulation is given to illustrate the performance of the proposed scheme.

  13. Particle swarm optimization: an alternative in marine propeller optimization?

    NASA Astrophysics Data System (ADS)

    Vesting, F.; Bensow, R. E.

    2018-01-01

    This article deals with improving and evaluating the performance of two evolutionary algorithm approaches for automated engineering design optimization. Here a marine propeller design with constraints on cavitation nuisance is the intended application. For this purpose, the particle swarm optimization (PSO) algorithm is adapted for multi-objective optimization and constraint handling for use in propeller design. Three PSO algorithms are developed and tested for the optimization of four commercial propeller designs for different ship types. The results are evaluated by interrogating the generation medians and the Pareto front development. The same propellers are also optimized utilizing the well-established NSGA-II genetic algorithm to provide benchmark results. The authors' PSO algorithms deliver results comparable to NSGA-II, but converge earlier and enhance the solution in terms of constraint violations.
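    The velocity update at the core of any PSO variant can be sketched in its plain single-objective form (the authors' multi-objective, constraint-handling extensions are omitted; the sphere objective and parameter values are illustrative): each particle is pulled toward its personal best and the swarm's global best.

```python
import random

def pso(cost, dim, bounds, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Plain single-objective PSO minimizing cost over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal bests
    pc = [cost(x) for x in X]
    g = min(range(n), key=lambda i: pc[i])   # initial global best
    G, gc = P[g][:], pc[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            c = cost(X[i])
            if c < pc[i]:
                P[i], pc[i] = X[i][:], c
                if c < gc:
                    G, gc = X[i][:], c
    return G, gc
```

    Multi-objective adaptations replace the single global best with an archive of non-dominated solutions, and constraint handling typically penalizes or repairs infeasible particles.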

  14. An optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system.

    PubMed

    Shen, L; Levine, S H; Catchen, G L

    1987-07-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm⁻² or at any depth of interest. The doses at 7 mg·cm⁻² in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, the advantage of the optimization method is that its performance is not sensitive to the specific method of calibration.

  15. Detection of fatigue cracks by nondestructive testing methods

    NASA Technical Reports Server (NTRS)

    Anderson, R. T.; Delacy, T. J.; Stewart, R. C.

    1973-01-01

    The effectiveness of various NDT methods in detecting small, tight cracks was assessed by randomly introducing fatigue cracks into aluminum sheets. The study included optimizing the NDT methods, calibrating NDT equipment with fatigue-cracked standards, and evaluating a number of cracked specimens by the optimized NDT methods. The evaluations were conducted by highly trained personnel, provided with detailed procedures, in order to minimize the effects of human variability. These personnel performed the NDT on the test specimens without knowledge of the flaw locations and reported on the flaws detected. The performance of these tests was measured by comparing the flaws detected against the flaws present. The principal NDT methods utilized were radiographic, ultrasonic, penetrant, and eddy current. Holographic interferometry, acoustic emission monitoring, and replication methods were also applied on a reduced number of specimens. Generally, the best performance was shown by eddy current, ultrasonic, penetrant and holographic tests. Etching provided no measurable improvement, while proof loading improved flaw detectability. Data are shown that quantify the performances of the NDT methods applied.

  16. Optimum Design of High Speed Prop-Rotors

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi

    1992-01-01

    The objective of this research is to develop optimization procedures to provide design trends in high speed prop-rotors. The necessary disciplinary couplings are all considered within a closed loop optimization process. The procedures involve the consideration of blade aeroelastic, aerodynamic performance, structural and dynamic design requirements. Further, since the design involves consideration of several different objectives, multiobjective function formulation techniques are developed.

  17. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
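    The automated correlation step can be sketched as gradient-based least-squares fitting of model parameters to test data (a drastically simplified stand-in for the JWST methodology; the two-parameter linear model in the test is invented): adjust the parameters to minimize the squared model-versus-test discrepancy using finite-difference sensitivities.

```python
def fit_parameters(model, data, p0, steps=200, h=1e-4, lr=0.1):
    """Minimize the squared model-vs-test discrepancy over parameters p."""
    def loss(p):
        return sum((m - d) ** 2 for m, d in zip(model(p), data))

    p = list(p0)
    for _ in range(steps):
        for i in range(len(p)):             # coordinate-wise descent
            q = p[:]
            q[i] += h
            grad = (loss(q) - loss(p)) / h  # forward-difference sensitivity
            p[i] -= lr * grad
    return p
```

    The same finite-difference sensitivities, computed before a test campaign begins, are what form a reusable library for the kind of sensitivity studies the abstract mentions.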

  18. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  19. Optimal coupling and feasibility of a solar-powered year-round ejector air conditioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokolov, M.; Hershgal, D.

    1993-06-01

    An ejector refrigeration system that uses a conventional refrigerant (R-114) is introduced as a possible mechanism for providing solar-based air conditioning. Optimal coupling conditions between the collectors' energy output and the energy requirements of the cooling system are investigated. Operation at such optimal conditions assures maximized overall efficiency. Procedures leading to the evaluation of the performance of a real system are disclosed. Design curves for such a system with R-114 as refrigerant are provided. A multi-ejector arrangement that provides efficient adjustment for variations in ambient conditions is described. Year-round air conditioning is facilitated by rerouting the refrigerant flow through a heating mode of the system. Calculations are carried out for illustrative configurations in which relatively low condensing temperatures (water reservoirs, cooling towers, or moderate climate) can be maintained.

  20. Unraveling Quantum Annealers using Classical Hardness

    PubMed Central

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  1. Piezoresistive Composite Silicon Dioxide Nanocantilever Surface Stress Sensor: Design and Optimization.

    PubMed

    Mathew, Ribu; Sankar, A Ravi

    2018-05-01

    In this paper, we present the design and optimization of a rectangular piezoresistive composite silicon dioxide nanocantilever sensor. Unlike the conventional design approach, we perform the sensor optimization by not only considering its electro-mechanical response but also incorporating the impact of self-heating-induced thermal drift on its terminal characteristics. Through extensive simulations, we first quantify the inaccuracies due to the self-heating effect induced by the geometrical and intrinsic parameters of the piezoresistor. Then, by optimizing the ratio of electrical sensitivity to thermal sensitivity, defined as the sensitivity ratio (υ), we improve the sensor performance and measurement reliability. Results show that to ensure υ ≥ 1, shorter and wider piezoresistors are better. In addition, contrary to the general belief that a high piezoresistor doping concentration reduces thermal sensitivity in piezoresistive sensors, we observe that to ensure υ ≥ 1 the doping concentration (p) should be in the range 1E18 cm^-3 ≤ p ≤ 1E19 cm^-3. Finally, we provide a set of design guidelines that will help NEMS engineers optimize the performance of such sensors for chemical and biological sensing applications.

  2. Optimization of Biosorptive Removal of Dye from Aqueous System by Cone Shell of Calabrian Pine

    PubMed Central

    Deniz, Fatih

    2014-01-01

    The biosorption performance of raw cone shell of Calabrian pine for C.I. Basic Red 46, a model azo dye, in an aqueous system was optimized using the Taguchi experimental design methodology. An L9 (3^3) orthogonal array was used to optimize the dye biosorption by the pine cone shell. The selected factors and their levels were biosorbent particle size, dye concentration, and contact time. The predicted dye biosorption capacity of the pine cone shell from the Taguchi design was 71.770 mg g−1 under optimized biosorption conditions. The experimental design provided reasonable predictive performance of dye biosorption by the biosorbent (R^2 = 0.9961). The Langmuir model fitted the biosorption equilibrium data better than the Freundlich model, indicating monolayer coverage of dye molecules on the biosorbent surface. The Dubinin-Radushkevich model and the standard Gibbs free energy change indicated physical biosorption as the predominant mechanism. The logistic function presented the best fit to the biosorption kinetics data. The kinetic parameters reflecting biosorption performance were also evaluated. The optimization study revealed that the pine cone shell can be an effective and economically feasible biosorbent for the removal of dye. PMID:25405213
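The L9 (3^3) screening above can be sketched as a main-effect analysis over the standard L9 orthogonal array: nine runs cover three factors at three levels, and the mean response at each level picks the predicted optimum. The response function below is a made-up stand-in for the measured biosorption capacity; only the factor names mirror the abstract.

```python
# Hypothetical sketch of a Taguchi L9(3^3) screening study.

L9 = [  # three factors, three levels each (coded 0, 1, 2)
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]

def response(run):  # stand-in for an experiment (e.g. dye uptake, mg/g)
    size, conc, time = run
    return 40 + 8 * conc + 5 * time - 3 * size

# Main-effect analysis: mean response at each level of each factor
# (each level appears exactly 3 times per factor in a balanced array).
best_levels = []
for factor in range(3):
    means = [sum(response(r) for r in L9 if r[factor] == lv) / 3 for lv in (0, 1, 2)]
    best_levels.append(max(range(3), key=lambda lv: means[lv]))
print(best_levels)  # predicted optimum settings, one level index per factor
```

For this toy response the analysis selects the smallest particle size and the largest concentration and contact time, exactly the directions the linear coefficients favor.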

  3. Computer-Simulation Surrogates for Optimization: Application to Trapezoidal Ducts and Axisymmetric Bodies

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Paraschivoiu, Marius; Yesilyurt, Serhat; Patera, Anthony T.

    1995-01-01

    Engineering design and optimization efforts using computational systems rapidly become resource intensive. The goal of the surrogate-based approach is to perform a complete optimization with limited resources. In this paper we present a Bayesian-validated approach that informs the designer as to how well the surrogate performs; in particular, our surrogate framework provides precise (albeit probabilistic) bounds on the errors incurred in the surrogate-for-simulation substitution. The theory and algorithms of our computer-simulation surrogate framework are first described. The utility of the framework is then demonstrated through two illustrative examples: maximization of the flowrate of fully developed flow in trapezoidal ducts; and design of an axisymmetric body that achieves a target Stokes drag.

  4. Optimizing Sustainable Geothermal Heat Extraction

    NASA Astrophysics Data System (ADS)

    Patel, Iti; Bielicki, Jeffrey; Buscheck, Thomas

    2016-04-01

    Geothermal heat, though renewable, can be depleted over time if the rate of heat extraction exceeds the natural rate of renewal. As such, the sustainability of a geothermal resource is typically viewed as preserving the energy of the reservoir by weighing heat extraction against renewability. But heat that is extracted from a geothermal reservoir is used to provide a service to society and an economic gain to the provider of that service. For heat extraction used for market commodities, sustainability entails balancing the rate at which the reservoir temperature renews with the rate at which heat is extracted and converted into economic profit. We present a model for managing geothermal resources that combines simulations of geothermal reservoir performance with natural resource economics in order to develop optimal heat mining strategies. Similar optimal control approaches have been developed for managing other renewable resources, like fisheries and forests. We used the Non-isothermal Unsaturated-saturated Flow and Transport (NUFT) model to simulate the performance of a sedimentary geothermal reservoir under a variety of geologic and operational situations. The results of NUFT are integrated into the optimization model to determine the extraction path over time that maximizes the net present profit given the performance of the geothermal resource. Results suggest that the discount rate that is used to calculate the net present value of economic gain is a major determinant of the optimal extraction path, particularly for shallower and cooler reservoirs, where the regeneration of energy due to the natural geothermal heat flux is a smaller percentage of the amount of energy that is extracted from the reservoir.
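The role of the discount rate noted above can be illustrated with a toy net-present-value comparison (all cash flows hypothetical): two extraction schedules with identical total profit diverge in value once future profit is discounted, which is why the discount rate steers the optimal extraction path.

```python
# Toy illustration, not the NUFT-coupled model: the discount rate determines
# whether aggressive early extraction or a sustained, near-renewal-rate
# schedule maximizes net present profit.

def npv(cash_flows, rate):
    # cash_flows[t] is the profit in year t; year 0 is undiscounted
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

front_loaded = [50, 30, 15, 5, 0, 0]     # deplete the reservoir early
sustained    = [18, 17, 17, 16, 16, 16]  # hold extraction near the renewal rate

# Both schedules earn the same total (100), but discounting at 10%/yr
# favors the front-loaded schedule.
print(npv(front_loaded, 0.10), npv(sustained, 0.10))
```

At a zero discount rate the two schedules are exactly equal in value; the higher the rate, the more the optimizer is pushed toward early, aggressive heat mining.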

  5. A method for performance comparison of polycentric knees and its application to the design of a knee for developing countries.

    PubMed

    Anand, T S; Sujatha, S

    2017-08-01

    Polycentric knees for transfemoral prostheses have a variety of geometries, but a survey of literature shows that there are few ways of comparing their performance. Our objective was to present a method for performance comparison of polycentric knee geometries and design a new geometry. In this work, we define parameters to compare various commercially available prosthetic knees in terms of their stability, toe clearance, maximum flexion, and so on and optimize the parameters to obtain a new knee design. We use the defined parameters and optimization to design a new knee geometry that provides the greater stability and toe clearance necessary to navigate uneven terrain which is typically encountered in developing countries. Several commercial knees were compared based on the defined parameters to determine their suitability for uneven terrain. A new knee was designed based on optimization of these parameters. Preliminary user testing indicates that the new knee is very stable and easy to use. The methodology can be used for better knee selection and design of more customized knee geometries. Clinical relevance The method provides a tool to aid in the selection and design of polycentric knees for transfemoral prostheses.

  6. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process; (2) the adaptive algorithm is performed using QPSO for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate its ability to provide effective optimization, considerable convergence speed, and high accuracy compared to the conventional DNA computing algorithm. PMID:23935409
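A minimal QPSO sketch (assumed parameter names and a generic sphere test function, not the paper's implementation) shows the quantum-behaved position update x = p ± α·|mbest − x|·ln(1/u) that distinguishes QPSO from classical PSO:

```python
import math, random

# Minimal QPSO sketch minimizing a sphere function.  ALPHA is the
# contraction-expansion coefficient; mbest is the mean of personal bests.
random.seed(1)
DIM, N, ITERS, ALPHA = 2, 10, 200, 0.75

def fitness(x):
    return sum(v * v for v in x)  # optimum is 0 at the origin

swarm = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
pbest = [list(x) for x in swarm]
gbest = min(pbest, key=fitness)

for _ in range(ITERS):
    mbest = [sum(p[d] for p in pbest) / N for d in range(DIM)]
    for i, x in enumerate(swarm):
        for d in range(DIM):
            phi = random.random()
            p = phi * pbest[i][d] + (1 - phi) * gbest[d]  # local attractor
            u = 1.0 - random.random()                     # u in (0, 1]
            sign = 1 if random.random() < 0.5 else -1
            x[d] = p + sign * ALPHA * abs(mbest[d] - x[d]) * math.log(1 / u)
        if fitness(x) < fitness(pbest[i]):
            pbest[i] = list(x)
    gbest = min(pbest, key=fitness)

print(fitness(gbest))  # best fitness found; should be near the optimum at 0
```

In the paper's setting the decision variables would be the DNA computing parameters (population size, mutation rates, etc.) and the fitness would come from the system identification error, rather than this toy function.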

  7. Highly Sensitive Refractive Index Sensors with Plasmonic Nanoantennas-Utilization of Optimal Spectral Detuning of Fano Resonances.

    PubMed

    Mesch, Martin; Weiss, Thomas; Schäferling, Martin; Hentschel, Mario; Hegde, Ravi S; Giessen, Harald

    2018-05-25

    We analyze and optimize the performance of coupled plasmonic nanoantennas for refractive index sensing. The investigated structure supports sub- and super-radiant modes that originate from the weak coupling of a dipolar and a quadrupolar mode, resulting in a Fano-type spectral line shape. In our study, we vary the near-field coupling of the two modes and particularly examine the influence of the spectral detuning between them on the sensing performance. Surprisingly, the case of matched resonance frequencies does not provide the best sensor. Instead, we find that the right combination of coupling strength and spectral detuning achieves the ideal balance of narrow line width and sufficient excitation strength of the subradiant mode, and therefore results in optimized sensor performance. Our findings are confirmed by experimental results and first-order perturbation theory. The latter is based on the resonant state expansion and provides direct access to resonance frequency shifts and line width changes as well as the excitation strength of the modes. Based on these parameters, we define a figure of merit that can be easily calculated for different sensing geometries and agrees well with the numerical and experimental results.

  8. Improving the photovoltaic performance of perovskite solar cells with acetate

    PubMed Central

    Zhao, Qian; Li, G. R.; Song, Jian; Zhao, Yulong; Qiang, Yinghuai; Gao, X. P.

    2016-01-01

    In an all-solid-state perovskite solar cell, the methylammonium lead halide film is responsible for generating photo-excited electrons, so its quality directly influences the final photovoltaic performance of the solar cell. This paper presents a very simple chemical approach to improving the quality of a perovskite film with a suitable amount of acetic acid. With the introduction of acetate ions, a homogeneous, continuous, and hole-free perovskite film comprised of high-crystallinity grains is obtained. UV-visible spectra and steady-state and time-resolved photoluminescence (PL) spectra reveal that the perovskite film obtained under the optimized conditions shows higher light absorption, more efficient electron transport, and faster electron extraction to the adjoining electron transport layer. These features enable the optimized perovskite film to provide an improved short-circuit current. The corresponding solar cells with a planar configuration achieve an improved power conversion efficiency of 13.80%, and the highest power conversion efficiency in the photovoltaic measurements is up to 14.71%. The results not only provide a simple approach to optimizing perovskite films but also offer a new perspective on fabricating high-performance perovskite solar cells. PMID:27934924

  9. Improving the photovoltaic performance of perovskite solar cells with acetate.

    PubMed

    Zhao, Qian; Li, G R; Song, Jian; Zhao, Yulong; Qiang, Yinghuai; Gao, X P

    2016-12-09

    In an all-solid-state perovskite solar cell, the methylammonium lead halide film is responsible for generating photo-excited electrons, so its quality directly influences the final photovoltaic performance of the solar cell. This paper presents a very simple chemical approach to improving the quality of a perovskite film with a suitable amount of acetic acid. With the introduction of acetate ions, a homogeneous, continuous, and hole-free perovskite film comprised of high-crystallinity grains is obtained. UV-visible spectra and steady-state and time-resolved photoluminescence (PL) spectra reveal that the perovskite film obtained under the optimized conditions shows higher light absorption, more efficient electron transport, and faster electron extraction to the adjoining electron transport layer. These features enable the optimized perovskite film to provide an improved short-circuit current. The corresponding solar cells with a planar configuration achieve an improved power conversion efficiency of 13.80%, and the highest power conversion efficiency in the photovoltaic measurements is up to 14.71%. The results not only provide a simple approach to optimizing perovskite films but also offer a new perspective on fabricating high-performance perovskite solar cells.

  10. Design Methods and Optimization for Morphing Aircraft

    NASA Technical Reports Server (NTRS)

    Crossley, William A.

    2005-01-01

    This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size, and performance of several fixed-geometry aircraft that are Pareto-optimal with respect to two competing aircraft performance objectives. The second area, titled 'morphing as an independent variable', formulates the sizing of a morphing aircraft as an optimization problem in which the amount of geometric morphing for various aircraft parameters is included among the design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
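The Pareto-optimality screening in the first area can be sketched as a non-dominated filter over candidate designs; the design points and objectives below (gross weight to minimize, range to maximize) are hypothetical, not from the sizing code.

```python
# Sketch of a Pareto (non-dominated) filter for two competing objectives.
# A design is Pareto-optimal if no other design is at least as good on both
# objectives and strictly better on one.

def pareto_front(designs):
    front = []
    for name, weight, rng in designs:
        dominated = any(w <= weight and r >= rng and (w < weight or r > rng)
                        for n, w, r in designs)
        if not dominated:
            front.append(name)
    return front

designs = [  # (name, gross weight [lb], range [nmi]) -- illustrative values
    ("A", 50_000, 2000),   # light, short range
    ("B", 60_000, 3500),
    ("C", 62_000, 3400),   # heavier AND shorter-ranged than B: dominated
    ("D", 75_000, 5000),   # heavy, long range
]
print(pareto_front(designs))  # the non-dominated set
```

Design C is filtered out because B beats it on both objectives; the surviving set traces the trade-off curve a morphing feature would try to span.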

  11. Optimization and performance of the Robert Stobie Spectrograph Near-InfraRed detector system

    NASA Astrophysics Data System (ADS)

    Mosby, Gregory; Indahl, Briana; Eggen, Nathan; Wolf, Marsha; Hooper, Eric; Jaehnig, Kurt; Thielman, Don; Burse, Mahesh

    2018-01-01

    At the University of Wisconsin-Madison, we are building and testing the near-infrared (NIR) arm of the Robert Stobie Spectrograph for the Southern African Large Telescope (RSS-NIR). RSS-NIR will be an enclosed, cooled integral field spectrograph. The RSS-NIR detector system uses a HAWAII-2RG (H2RG) HgCdTe detector from Teledyne controlled by the SIDECAR ASIC and an Inter-University Centre for Astronomy and Astrophysics (IUCAA) ISDEC card. We have successfully characterized and optimized the detector system and report on the optimization steps and performance of the system. We have reduced the CDS read noise to ~20 e- for 200 kHz operation by optimizing ASIC settings. We show an additional factor-of-3 reduction in read noise using Fowler sampling techniques and a factor-of-2 reduction using up-the-ramp group sampling techniques. We also provide calculations to quantify the conditions for sky-limited observations using these sampling techniques.
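The read-noise reductions quoted above follow the usual averaging statistics: Fowler-N sampling averages N non-destructive reads at each end of the exposure, lowering the differenced noise by roughly √N relative to CDS (which differences single reads). A Monte Carlo sketch with illustrative, not measured, numbers:

```python
import random, statistics

# Monte Carlo sketch of CDS vs. Fowler-N read noise.  SIGMA is a hypothetical
# single-read noise (electrons); the ratio of differenced noises approaches
# sqrt(N) for uncorrelated reads.
random.seed(0)
SIGMA, N, TRIALS = 20.0, 8, 20000

def reads(n):
    return [random.gauss(0.0, SIGMA) for _ in range(n)]

# CDS: one read at each end of the ramp, differenced
cds = [reads(1)[0] - reads(1)[0] for _ in range(TRIALS)]
# Fowler-N: average N reads at each end, then difference
fowler = [statistics.fmean(reads(N)) - statistics.fmean(reads(N))
          for _ in range(TRIALS)]

print(statistics.pstdev(cds) / statistics.pstdev(fowler))  # close to sqrt(N)
```

The √N scaling only holds while read noise dominates; once dark current or sky photon noise takes over, more samples stop helping, which is what the sky-limit calculations in the abstract quantify.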

  12. Wideband Scattering Diffusion by using Diffraction of Periodic Surfaces and Optimized Unit Cell Geometries

    PubMed Central

    Costa, Filippo; Monorchio, Agostino; Manara, Giuliano

    2016-01-01

    A methodology to obtain wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry, arranged in a periodic lattice whose repetition period is larger than one wavelength, which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized in order to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of the FSS geometry is performed through a genetic algorithm in conjunction with a periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-to-thickness ratio and removes the need for a high-resolution printing process. PMID:27181841

  13. Performance tradeoffs in static and dynamic load balancing strategies

    NASA Technical Reports Server (NTRS)

    Iqbal, M. A.; Saltz, J. H.; Bokhari, S. H.

    1986-01-01

    The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
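Strategy (3) can be sketched as a simple greedy variant (longest-processing-time list scheduling, an illustration rather than the paper's exact fully polynomial approximation scheme): repeatedly assign the largest remaining task to the least-loaded processor. Task weights below are hypothetical.

```python
import heapq

# Greedy static assignment sketch: sort tasks by decreasing cost, then give
# each task to the currently least-loaded processor (tracked in a min-heap).

def greedy_assign(tasks, n_procs):
    heap = [(0.0, p) for p in range(n_procs)]   # (load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for t in sorted(tasks, reverse=True):
        load, p = heapq.heappop(heap)            # least-loaded processor
        assignment[p].append(t)
        heapq.heappush(heap, (load + t, p))
    return assignment, max(load for load, _ in heap)

assignment, makespan = greedy_assign([7, 5, 4, 3, 2, 2], 2)
print(makespan)  # maximum processor load, the quantity load balancing minimizes
```

For this instance the greedy schedule reaches the optimal makespan of 12; in general LPT is only guaranteed to be within 4/3 of optimal, which is why the paper's approximation scheme with tunable accuracy is of interest.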

  14. Autorotation flight control system

    NASA Technical Reports Server (NTRS)

    Bachelder, Edward N. (Inventor); Aponso, Bimal L. (Inventor); Lee, Dong-Chan (Inventor)

    2011-01-01

    The present invention provides computer implemented methodology that permits the safe landing and recovery of rotorcraft following engine failure. With this invention successful autorotations may be performed from well within the unsafe operating area of the height-velocity profile of a helicopter by employing the fast and robust real-time trajectory optimization algorithm that commands control motion through an intuitive pilot display, or directly in the case of autonomous rotorcraft. The algorithm generates optimal trajectories and control commands via the direct-collocation optimization method, solved using a nonlinear programming problem solver. The control inputs computed are collective pitch and aircraft pitch, which are easily tracked and manipulated by the pilot or converted to control actuator commands for automated operation during autorotation in the case of an autonomous rotorcraft. The formulation of the optimal control problem has been carefully tailored so the solutions resemble those of an expert pilot, accounting for the performance limitations of the rotorcraft and safety concerns.

  15. Display/control requirements for automated VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hoffman, W. C.; Kleinman, D. L.; Young, L. R.

    1976-01-01

    A systematic design methodology for pilot displays in advanced commercial VTOL aircraft was developed and refined. The analyst is provided with a step-by-step procedure for conducting conceptual display/control configurations evaluations for simultaneous monitoring and control pilot tasks. The approach consists of three phases: formulation of information requirements, configuration evaluation, and system selection. Both the monitoring and control performance models are based upon the optimal control model of the human operator. Extensions to the conventional optimal control model required in the display design methodology include explicit optimization of control/monitoring attention; simultaneous monitoring and control performance predictions; and indifference threshold effects. The methodology was applied to NASA's experimental CH-47 helicopter in support of the VALT program. The CH-47 application examined the system performance of six flight conditions. Four candidate configurations are suggested for evaluation in pilot-in-the-loop simulations and eventual flight tests.

  16. Development of coin-type cell and engineering of its compartments for rechargeable seawater batteries

    NASA Astrophysics Data System (ADS)

    Han, Jinhyup; Hwang, Soo Min; Go, Wooseok; Senthilkumar, S. T.; Jeon, Donghoon; Kim, Youngsik

    2018-01-01

    Cell design and optimization of the components, including active materials and passive components, play an important role in constructing robust, high-performance rechargeable batteries. Seawater batteries, which utilize earth-abundant, natural seawater as the active material in an open-structured cathode, require a new platform for building and testing cells other than the typical Li-ion coin-type or pouch-type cells. Herein, we present new findings based on our optimized cell. Engineering the cathode components, namely improving the wettability of the cathode current collector and the seawater catholyte flow, improves the battery performance (voltage efficiency). Optimizing the cell components and design is the key to identifying the electrochemical processes and reactions of the active materials. Hence, the outcome of this research can enable a systematic study of potential active materials for seawater batteries and their effect on electrochemical performance.

  17. Dichroic beamsplitter for high energy laser diagnostics

    DOEpatents

    LaFortune, Kai N [Livermore, CA; Hurd, Randall [Tracy, CA; Fochs, Scott N [Livermore, CA; Rotter, Mark D [San Ramon, CA; Hackel, Lloyd [Livermore, CA

    2011-08-30

    Wavefront control techniques are provided for the alignment and performance optimization of optical devices. A Shack-Hartmann wavefront sensor can be used to measure the wavefront distortion, and a control system generates a feedback error signal to optics inside the device to correct the wavefront. The system can be calibrated with a low-average-power probe laser. An optical element is provided to couple the optical device to a diagnostic/control package in a way that optimizes both the output power of the optical device and the coupling of the probe light into the diagnostics.

  18. Optimization of a multi-well array SERS chip

    NASA Astrophysics Data System (ADS)

    Abell, J. L.; Driskell, J. D.; Dluhy, R. A.; Tripp, R. A.; Zhao, Y.-P.

    2009-05-01

    SERS-active substrates are fabricated by oblique angle deposition and patterned by a polymer-molding technique to provide a uniform array for high throughput biosensing and multiplexing. Using a conventional SERS-active molecule, 1,2-Bis(4-pyridyl)ethylene (BPE), we show that this device provides a uniform Raman signal enhancement from well to well. The patterning technique employed in this study demonstrates a flexibility allowing for patterning control and customization, and performance optimization of the substrate. Avian influenza is analyzed to demonstrate the ability of this multi-well patterned SERS substrate for biosensing.

  19. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE--A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis

    2004-07-01

    This document details the progress to date on the "OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE--A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING" contract for the quarter starting April 2004 through June 2004. The DOE and TerraTek continue to wait for Novatek on the optimization portion of the testing program (they are completely rebuilding their fluid hammer). The latest indication is that the Novatek tool would be ready for retesting only in 4Q 2004 or later. Smith International's hammer was tested in April of 2004 (2Q 2004 report). Accomplishments included the following: (1) TerraTek re-tested the "optimized" fluid hammer provided by Smith International during April 2004. Many improvements in mud hammer rates of penetration were noted over Phase 1 benchmark testing from November 2002. (2) Shell Exploration and Production in The Hague was briefed on various drilling performance projects, including Task 8 "Cutter Impact Testing". Shell's interest and willingness to assist in the test matrix as an Industry Advisor is appreciated. (3) TerraTek participated in a DOE/NETL review meeting at Morgantown on April 15, 2004. The discussions were very helpful, and a program related to the mud hammer optimization project was noted: Terralog modeling work on percussion tools. (4) Terralog's Dr. Gang Han witnessed some of the full-scale optimization testing of the Smith International hammer in order to familiarize him with downhole tools. TerraTek recommends that modeling first start with single cutters/inserts and progress in complexity. (5) The final equipment problem on the impact testing task was resolved through the acquisition of a high-data-rate laser-based displacement instrument. (6) TerraTek provided Novatek much engineering support for the future re-testing of their optimized tool. Work was conducted on slip ring [electrical] specifications and tool collar sealing in the testing vessel with a reconfigured flow system on Novatek's collar.

  20. Design optimization of sinusoidal glass honeycomb for flat plate solar collectors

    NASA Technical Reports Server (NTRS)

    Mcmurrin, J. C.; Buchberg, H.

    1980-01-01

    The design of honeycomb made of sinusoidally corrugated glass strips was optimized for use in water-cooled, single-glazed flat plate solar collectors with non-selective black absorbers. The cell diameter (d), cell height (L), and pitch/diameter ratio (P/d) maximizing solar collector performance and cost effectiveness for a given cell wall thickness (t_w) and optical properties of glass were determined from radiative and convective honeycomb characteristics and collector performance, all calculated with experimentally validated algorithms. Relative lifetime values were estimated from present materials costs and postulated production methods for corrugated glass honeycomb cover assemblies. A honeycomb with P/d = 1.05, d = 17.4 mm, L = 146 mm, and t_w = 0.15 mm would provide near-optimal performance over the range 0 °C ≤ ΔT_C ≤ 80 °C and be superior in performance and cost effectiveness to a non-honeycomb collector with a 0.92/0.12 selective black absorber.

  1. Performance Enhancing Diets and the PRISE Protocol to Optimize Athletic Performance

    PubMed Central

    Arciero, Paul J.; Ward, Emery

    2015-01-01

    The training regimens of modern-day athletes have evolved from a sole emphasis on a single fitness component (e.g., endurance or resistance/strength training) to an integrative, multimode approach encompassing all four of the major fitness components: resistance (R), interval sprints (I), stretching (S), and endurance (E) training. Athletes rarely, if ever, focus their training on only one mode of exercise but instead routinely engage in a multimode training program. In addition, timed daily protein (P) intake has become a hallmark for all athletes. Recent studies, including from our laboratory, have validated the effectiveness of this multimode paradigm (RISE) and protein-feeding regimen, which we have collectively termed PRISE. Unfortunately, sports nutrition recommendations and guidelines have lagged behind the PRISE integrative nutrition and training model and therefore limit an athlete's ability to succeed. Thus, it is the purpose of this review to provide a clearly defined roadmap linking specific performance enhancing diets (PEDs) with each PRISE component to facilitate optimal nourishment and ultimately optimal athletic performance. PMID:25949823

  2. Design considerations of high-performance InGaAs/InP single-photon avalanche diodes for quantum key distribution.

    PubMed

    Ma, Jian; Bai, Bing; Wang, Liu-Jun; Tong, Cun-Zhu; Jin, Ge; Zhang, Jun; Pan, Jian-Wei

    2016-09-20

    InGaAs/InP single-photon avalanche diodes (SPADs) are widely used in practical applications requiring near-infrared photon counting, such as quantum key distribution (QKD). Photon detection efficiency and dark count rate are the intrinsic parameters of InGaAs/InP SPADs, because their performance cannot be improved by different quenching electronics under the same operating conditions. After modeling these parameters and developing a simulation platform for InGaAs/InP SPADs, we investigate the semiconductor structure design and optimization. Photon detection efficiency and dark count rate depend strongly on the absorption layer thickness, multiplication layer thickness, excess bias voltage, and temperature. By evaluating decoy-state QKD performance, the variables for SPAD design and operation can be globally optimized. Such optimization from the perspective of a specific application provides an effective approach to designing high-performance InGaAs/InP SPADs.

  3. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  4. Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.

    1998-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
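A minimal sketch of the neural-net response-surface idea: fit a tiny one-hidden-layer network to a few samples of a hypothetical 1-D "simulation," then search the cheap surrogate instead of the expensive solver. The objective, network size, and training settings are invented; the paper partitions a real design space and mixes neural nets with polynomial fits:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    # Hypothetical expensive solver output at design point x.
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

X = np.linspace(0.0, 1.0, 12)          # a handful of "simulations"
y = simulate(X)
Xc, yc = X.reshape(1, -1), y.reshape(1, -1)

# One-hidden-layer tanh network trained by full-batch gradient descent.
W1 = rng.normal(0.0, 1.0, (8, 1)); b1 = np.zeros((8, 1))
W2 = rng.normal(0.0, 1.0, (1, 8)); b2 = np.zeros((1, 1))
mse_init = float(((W2 @ np.tanh(W1 @ Xc + b1) + b2 - yc) ** 2).mean())
lr = 0.05
for _ in range(5000):
    H = np.tanh(W1 @ Xc + b1)
    err = W2 @ H + b2 - yc
    dW2 = err @ H.T / X.size;  db2 = err.mean(axis=1, keepdims=True)
    dH = (W2.T @ err) * (1.0 - H ** 2)
    dW1 = dH @ Xc.T / X.size;  db1 = dH.mean(axis=1, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1
mse_final = float(((W2 @ np.tanh(W1 @ Xc + b1) + b2 - yc) ** 2).mean())

# Search the cheap surrogate densely instead of re-running the solver.
xs = np.linspace(0.0, 1.0, 201)
surr = (W2 @ np.tanh(W1 @ xs.reshape(1, -1) + b1) + b2).ravel()
x_best = float(xs[surr.argmin()])
```

The dense surrogate search costs microseconds; each avoided Navier-Stokes run is what the method actually economizes.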

  5. A Robust Design Methodology for Optimal Microscale Secondary Flow Control in Compact Inlet Diffusers

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Keller, Dennis J.

    2001-01-01

    The purpose of this study is to develop an economical Robust Design methodology for microscale secondary flow control in compact inlet diffusers. To illustrate its potential, two different mission strategies were considered for the subject inlet, namely Maximum Performance and Maximum HCF Life Expectancy. The Maximum Performance mission maximized total pressure recovery, while the Maximum HCF Life Expectancy mission minimized the mean of the first five Fourier harmonic amplitudes, i.e., 'collectively' reduced all the harmonic half-amplitudes of engine face distortion. Each mission strategy was subject to a low engine face distortion constraint, i.e., DC60<0.10, a level acceptable for commercial engines. For each mission strategy, an 'Optimal Robust' (open loop control) and an 'Optimal Adaptive' (closed loop control) installation was designed over a twenty-degree angle-of-incidence range. The Optimal Robust installation used economical Robust Design methodology to arrive at a single design that operated over the entire angle-of-incidence range (open loop control). The Optimal Adaptive installation optimized all the design parameters at each angle-of-incidence, and would therefore require a closed loop control system to sense a proper signal for each effector and modify that effector device, whether mechanical or fluidic, for optimal inlet performance. In general, the performance differences between the Optimal Adaptive and Optimal Robust installation designs were found to be marginal. This suggests that Optimal Robust open loop installation designs can be very competitive with Optimal Adaptive closed loop designs: secondary flow control in inlets is inherently robust, provided it is optimally designed. Therefore, the new methodology presented in this paper, a combined-array 'Lower Order' approach to Robust DOE, offers the aerodynamicist a viable and economical way of exploring the concept of Robust inlet design, in which the mission variables are brought directly into the inlet design process and insensitivity, or robustness, to the mission variables becomes a design objective.
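The Robust-vs-Adaptive comparison can be sketched numerically: pick one fixed effector setting that minimizes mean distortion across the whole incidence range (robust, open loop) versus re-optimizing the setting at each incidence (adaptive, closed loop). The distortion response model below is invented purely for illustration:

```python
import numpy as np

# Hypothetical distortion response: best effector setting drifts with incidence.
def distortion(setting, incidence_deg):
    return 0.05 + 0.002 * (setting - 0.1 * incidence_deg) ** 2

settings = np.linspace(0.0, 2.0, 41)
incidences = np.linspace(0.0, 20.0, 21)   # twenty-degree incidence range

# Robust: one setting minimizing the mean distortion over all incidences.
mean_d = [np.mean([distortion(s, a) for a in incidences]) for s in settings]
robust_setting = float(settings[int(np.argmin(mean_d))])
robust_perf = min(mean_d)

# Adaptive: the best setting at each incidence, averaged over the range.
adaptive_perf = np.mean([min(distortion(s, a) for s in settings)
                         for a in incidences])
print(robust_setting, robust_perf, adaptive_perf)
```

With this toy response the robust single setting gives up very little versus full adaptation, echoing the abstract's finding that the two designs differ only marginally.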

  6. Computer-oriented synthesis of wide-band non-uniform negative resistance amplifiers

    NASA Technical Reports Server (NTRS)

    Branner, G. R.; Chan, S.-P.

    1975-01-01

    This paper presents a synthesis procedure which provides design values for broad-band amplifiers using non-uniform negative resistance devices. Employing a weighted least squares optimization scheme, the technique, based on an extension of procedures for uniform negative resistance devices, is capable of providing designs for a variety of matching network topologies. It also provides, for the first time, quantitative results for predicting the effects of parameter element variations on overall amplifier performance. The technique is also unique in that it employs exact partial derivatives for optimization and sensitivity computation. In comparison with conventional procedures, significantly improved broad-band designs are shown to result.
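The weighted least squares step at the core of such a synthesis scheme can be sketched via the normal equations; the regressors and weighting here are invented, not the amplifier matching-network formulation:

```python
import numpy as np

# Weighted least squares: solve (X^T W X) beta = X^T W y.
def wls(X, y, w):
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

x = np.linspace(1.0, 5.0, 8)
X = np.column_stack([np.ones_like(x), x])  # intercept + slope model
y = 2.0 + 3.0 * x                          # noiseless response
w = 1.0 / x                                # emphasize the low end of the band
beta = wls(X, y, w)
print(beta)   # recovers [2., 3.] exactly for noiseless data
```

In a broad-band design the weights would emphasize frequency bands where gain error matters most, exactly the role of the weighting in the paper's objective.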

  7. Performance Assessment for Pump-and-Treat Closure or Transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Truex, Michael J.; Johnson, Christian D.; Becker, Dave J.

    2015-09-29

    A structured performance assessment approach is useful to evaluate pump-and-treat (P&T) groundwater remediation, which has been applied at numerous sites. Consistent with the U.S. Environmental Protection Agency’s Groundwater Road Map, performance assessment during remedy implementation may be needed, and should consider remedy optimization, transition to alternative remedies, or remedy closure. In addition, a recent National Research Council study examined groundwater remediation at complex contaminated sites and concluded that it may be beneficial to evaluate remedy performance and the potential need for transition to alternative approaches at these sites. The intent of this document is to provide a structured approach for assessing P&T performance to support a decision to optimize, transition, or close a P&T remedy. The process presented in this document for gathering information and performing evaluations to support P&T remedy decisions includes use of decision elements to distinguish between potential outcomes of a remedy decision. Case studies are used to augment descriptions of decision elements and to illustrate each type of outcome identified in the performance assessment approach. The document provides references to resources for tools and other guidance relevant to conducting the P&T assessment.

  8. An integrated radar model solution for mission level performance and cost trades

    NASA Astrophysics Data System (ADS)

    Hodge, John; Duncan, Kerron; Zimmerman, Madeline; Drupp, Rob; Manno, Mike; Barrett, Donald; Smith, Amelia

    2017-05-01

    A fully integrated Mission-Level Radar model is in development as part of a multi-year effort under the Northrop Grumman Mission Systems (NGMS) sector's Model Based Engineering (MBE) initiative to digitally interconnect and unify previously separate performance and cost models. In 2016, an NGMS internal research and development (IR and D) funded multidisciplinary team integrated radio frequency (RF), power, control, size, weight, thermal, and cost models using commercial off-the-shelf software (ModelCenter) for an Active Electronically Scanned Array (AESA) radar system. Each model was digitally connected with standard interfaces and unified to allow end-to-end mission system optimization and trade studies. The radar model was then linked to the Air Force's mission modeling framework (AFSIM). The team first had to identify the necessary models and, with the aid of subject matter experts (SMEs), understand and document the inputs, outputs, and behaviors of the component models. This agile development process and collaboration enabled rapid integration of disparate models and the validation of their combined system performance. This MBE framework will allow NGMS to design systems more efficiently and affordably, optimize architectures, and provide increased value to the customer. The model integrates detailed component models that validate cost and performance at the physics level with high-level models that provide visualization of a platform mission. This connectivity of component to mission models allows hardware and software design solutions to be better optimized to meet mission needs, creating cost-optimal solutions for the customer, while reducing design cycle time through risk mitigation and early validation of design decisions.

  9. Understanding and mimicking the dual optimality of the fly ear

    NASA Astrophysics Data System (ADS)

    Liu, Haijun; Currano, Luke; Gee, Danny; Helms, Tristan; Yu, Miao

    2013-08-01

    The fly Ormia ochracea has the remarkable ability, given an eardrum separation of only 520 μm, to pinpoint the 5 kHz chirp of its cricket host. Previous research showed that the two eardrums are mechanically coupled, which amplifies the directional cues. We have now performed a mechanics and optimization analysis which reveals that the right coupling strength is key: it results in simultaneously optimized directional sensitivity and directional cue linearity at 5 kHz. We next demonstrated that this dual optimality is replicable in a synthetic device and can be tailored for a desired frequency. Finally, we demonstrated a miniature sensor endowed with this dual optimality at 8 kHz with unparalleled sound localization. This work provides a quantitative and mechanistic explanation for the fly's sound-localization ability from a new perspective, and it offers a framework for the development of fly-ear inspired sensors to overcome a previously insurmountable size constraint in engineered sound-localization systems.

  10. Integrated optimisation technique based on computer-aided capacity and safety evaluation for managing downstream lane-drop merging area of signalised junctions

    NASA Astrophysics Data System (ADS)

    Chen, CHAI; Yiik Diew, WONG

    2017-02-01

    This study presents an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at the downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiments, the proposed FCA approach provides capacity and safety evaluations of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and the analytic hierarchy process (AHP). Optimized geometric layouts and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance, dependent on traffic volume and speed limit, can thus be established at the downstream merging area.

  11. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
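One SBAO step named above, surrogate selection, can be sketched with leave-one-out cross-validation over candidate polynomial surrogates. The objective function and candidate degrees are invented for illustration:

```python
import numpy as np

# Hypothetical expensive objective (stand-in for a high-fidelity model).
def f(x):
    return np.sin(3 * x) + 0.5 * x

X = np.linspace(0.0, 2.0, 10)   # design of experiments: 10 sample points
y = f(X)

# Leave-one-out error of a polynomial surrogate of a given degree:
# refit without point i, measure the prediction error at point i.
def loo_error(deg):
    errs = []
    for i in range(X.size):
        mask = np.arange(X.size) != i
        c = np.polyfit(X[mask], y[mask], deg)
        errs.append((np.polyval(c, X[i]) - y[i]) ** 2)
    return float(np.mean(errs))

best_deg = min([1, 3, 5], key=loo_error)
print(best_deg, loo_error(best_deg))
```

Cross-validation penalizes both underfitting (the linear surrogate) and overfitting, so the selected degree balances accuracy against the limited sample budget.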

  12. REopt: A Platform for Energy System Integration and Optimization: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpkins, T.; Cutler, D.; Anderson, K.

    2014-08-01

    REopt is NREL's energy planning platform offering concurrent, multi-technology integration and optimization capabilities to help clients meet their cost savings and energy performance goals. The REopt platform provides techno-economic decision-support analysis throughout the energy planning process, from agency-level screening and macro planning to project development to energy asset operation. REopt employs an integrated approach to optimizing a site's energy costs by considering electricity and thermal consumption, resource availability, complex tariff structures including time-of-use, demand and sell-back rates, incentives, net-metering, and interconnection limits. Formulated as a mixed integer linear program, REopt recommends an optimally-sized mix of conventional and renewable energy, and energy storage technologies; estimates the net present value associated with implementing those technologies; and provides the cost-optimal dispatch strategy for operating them at maximum economic efficiency. The REopt platform can be customized to address a variety of energy optimization scenarios including policy, microgrid, and operational energy applications. This paper presents the REopt techno-economic model along with two examples of recently completed analysis projects.
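A mixed integer linear program of the kind named above can be sketched with a toy sizing problem. All prices, capacities, and the battery term are invented for illustration and have no relation to REopt's actual formulation:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: x = [grid_kwh, pv_kw, battery_installed (0/1)].
# Minimize annualized cost: grid energy + PV capital + battery capital.
c = np.array([0.12, 150.0, 800.0])   # $/kWh grid, $/kW-yr PV, $/yr battery

# Energy balance: grid + PV yield + battery shifting must cover annual load.
A = np.array([[1.0, 1500.0, 500.0]])        # kWh contributed per unit of each
cons = LinearConstraint(A, lb=10000.0, ub=np.inf)

bounds = Bounds(lb=[0.0, 0.0, 0.0], ub=[np.inf, 5.0, 1.0])  # PV capped at 5 kW
integrality = np.array([0, 0, 1])           # battery decision is binary

res = milp(c=c, constraints=cons, integrality=integrality, bounds=bounds)
print(res.x, res.fun)   # cost-optimal technology mix and annual cost
```

Here PV energy is cheaper per kWh than grid energy, so the solver builds out PV to its cap and fills the remainder from the grid, leaving the (expensive) battery out.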

  13. Multidisciplinary Concurrent Design Optimization via the Internet

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Kelkar, Atul G.; Koganti, Gopichand

    2001-01-01

    A methodology is presented which uses commercial design and analysis software and the Internet to perform concurrent multidisciplinary optimization. The methodology provides a means to develop multidisciplinary designs without requiring that all software be accessible from the same local network. The procedures are amenable to design and development teams whose members, expertise and respective software are not geographically located together. This methodology facilitates multidisciplinary teams working concurrently on a design problem of common interest. Partition of design software to different machines allows each constituent software to be used on the machine that provides the most economy and efficiency. The methodology is demonstrated on the concurrent design of a spacecraft structure and attitude control system. Results are compared to those derived from performing the design with an autonomous FORTRAN program.

  14. Dataset of working conditions and thermo-economic performances for hybrid organic Rankine plants fed by solar and low-grade energy sources.

    PubMed

    Scardigno, Domenico; Fanelli, Emanuele; Viggiano, Annarita; Braccio, Giacobbe; Magi, Vinicio

    2016-06-01

    This article provides the dataset of operating conditions of a hybrid organic Rankine plant generated by the optimization procedure employed in the research article "A genetic optimization of a hybrid organic Rankine plant for solar and low-grade energy sources" (Scardigno et al., 2015) [1]. The methodology used to obtain the data is described. The operating conditions are subdivided into two separate groups: feasible and unfeasible solutions. For both groups, the values of the design variables are given. Moreover, the subset of feasible solutions is described in detail by providing the thermodynamic and economic performances, the temperatures at some characteristic sections of the thermodynamic cycle, the net power, the absorbed powers, and the area of the heat exchange surfaces.

  15. Lower Emittance Lattice for the Advanced Photon Source Upgrade Using Reverse Bending Magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.; Berenc, T.; Sun, Y.

    The Advanced Photon Source (APS) is pursuing an upgrade to the storage ring to a hybrid seven-bend-achromat design [1]. The nominal design provides a natural emittance of 67 pm [2]. By adding reverse dipole fields to several quadrupoles [3, 4] we can reduce the natural emittance to 41 pm while simultaneously providing more optimal beta functions in the insertion devices and increasing the dispersion function at the chromaticity sextupole magnets. The improved emittance results from a combination of increased energy loss per turn and a change in the damping partition. At the same time, the nonlinear dynamics performance is very similar, thanks in part to increased dispersion in the sextupoles. This paper describes the properties, optimization, and performance of the new lattice.

  16. Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.

    PubMed

    Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei

    2015-06-25

    Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation System (INS) based on an inertial reference frame are discussed in this paper. Both are based on gravity vector integration; therefore, their performance is determined by the integration time. In previous works, the integration time was selected by experience. In order to give a criterion for the selection process, and to make the selection of the integration time more accurate, an optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is based on an analysis of the error characteristics of the two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that different operational conditions call for different coarse alignment algorithms in order to achieve better performance. Finally, the experimental results validate the effectiveness of the proposed algorithm.

  17. On the Improvement of Convergence Performance for Integrated Design of Wind Turbine Blade Using a Vector Dominating Multi-objective Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.

    2016-09-01

    A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred to as VD-MOEA, is proposed and applied to the aerodynamic-structural integrated design of a wind turbine blade. In the algorithm, a set of uniformly distributed vectors is constructed to guide the population toward the Pareto front rapidly while maintaining population diversity with high efficiency. Two- and three-objective designs of a 1.5 MW wind turbine blade are then carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto optimal solutions can be obtained in a single simulation run and are uniformly distributed in the objective space, maximally maintaining the population diversity. In comparison to conventional evolution algorithms, VD-MOEA displays a dramatic improvement in both convergence and diversity preservation when handling complex problems with many variables, objectives, and constraints. This provides a reliable, high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blades.
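The core relation any such multi-objective algorithm builds on is Pareto dominance. A minimal filter (minimization convention; the toy population values are invented) might look like:

```python
# a dominates b if a is no worse in every objective and strictly better in one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy population: (blade mass, -annual energy production), both minimized.
pop = [(2.0, -9.0), (3.0, -10.0), (2.5, -8.0), (4.0, -10.0)]
front = pareto_front(pop)
print(front)   # the non-dominated designs
```

The maximized objective (annual energy production) is negated so that a single minimization convention applies; VD-MOEA's guide vectors then spread the population along this front.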

  18. A method for obtaining reduced-order control laws for high-order systems using optimization techniques

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Newsom, J. R.; Abel, I.

    1981-01-01

    A method of synthesizing reduced-order optimal feedback control laws for a high-order system is developed. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean-square steady-state responses and control inputs. An analogy with the linear quadratic Gaussian solution is utilized to select a set of design variables and their initial values. To improve the stability margins of the system, an input-noise adjustment procedure is used in the design algorithm. The method is applied to the synthesis of an active flutter-suppression control law for a wind tunnel model of an aeroelastic wing. The reduced-order controller is compared with the corresponding full-order controller and found to provide nearly optimal performance. The performance of the present method appeared to be superior to that of two other control law order-reduction methods. It is concluded that by using the present algorithm, nearly optimal low-order control laws with good stability margins can be synthesized.
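The idea of searching feedback gains to minimize a weighted mean-square cost can be sketched on a scalar toy plant (all numbers invented; the paper works with a high-order aeroelastic model):

```python
import numpy as np
from scipy.optimize import minimize

# Scalar plant x+ = a*x + b*u + w with unit white noise, feedback u = -f*x.
a, b, q, r = 0.9, 1.0, 1.0, 0.1

def cost(fv):
    f = fv[0]
    acl = a - b * f
    if abs(acl) >= 1.0:             # unstable loop: infinite steady-state variance
        return np.inf
    P = q / (1.0 - acl ** 2)        # steady-state state variance
    return (1.0 + r * f ** 2) * P   # weighted response + control effort

# Nonlinear programming search over the single design variable f.
res = minimize(cost, x0=[0.5], method="Nelder-Mead")
f_star = float(res.x[0])
print(f_star, cost([f_star]))
```

The infinite penalty on unstable closed loops keeps the search inside the stabilizing gain set, a crude stand-in for the stability-margin safeguards the paper handles with its input-noise adjustment.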

  19. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    PubMed

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
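The Cache Common Segments idea can be sketched as memoization of per-segment verification: many updates share leading AS-path segments, so each segment's expensive signature check is done once. The hash-prefix check below is an invented stand-in for real BGPSEC cryptography:

```python
import hashlib

cache = {}
CALLS = {"n": 0}

def sig_of(segment):
    # Toy "signature": a hash prefix (NOT real BGPSEC crypto).
    return hashlib.sha256(segment.encode()).hexdigest()[:8]

def verify_segment(segment, sig):
    CALLS["n"] += 1                 # counts the expensive verifications
    return sig_of(segment) == sig

def verify_path(segments_with_sigs):
    for seg, sig in segments_with_sigs:
        key = (seg, sig)
        if key not in cache:        # CCS: reuse prior verification results
            cache[key] = verify_segment(seg, sig)
        if not cache[key]:
            return False
    return True

upd1 = [("AS65001", sig_of("AS65001")), ("AS65002", sig_of("AS65002"))]
upd2 = [("AS65001", sig_of("AS65001")), ("AS65003", sig_of("AS65003"))]
ok = verify_path(upd1) and verify_path(upd2)
print(ok, CALLS["n"])   # the shared AS65001 segment is verified only once
```

During a peering reset, when thousands of updates share path prefixes, this reuse is what shortens routing table convergence.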

  20. Computation of optimal output-feedback compensators for linear time-invariant systems

    NASA Technical Reports Server (NTRS)

    Platzman, L. K.

    1972-01-01

    The control of linear time-invariant systems with respect to a quadratic performance criterion was considered, subject to the constraint that the control vector be a constant linear transformation of the output vector. The optimal feedback matrix, f*, was selected to optimize the expected performance, given the covariance of the initial state. It is first shown that the expected performance criterion can be expressed as the ratio of two multinomials in the elements of f. This expression provides the basis for a feasible method of determining f* in the case of single-input single-output systems. A number of iterative algorithms are then proposed for the calculation of f* for multiple input-output systems. For two of these, monotone convergence is proved, but they involve the solution of nonlinear matrix equations at each iteration. Another is proposed involving the solution of Lyapunov equations at each iteration, and the gradual increase of the magnitude of a penalty function. Experience with this algorithm will be needed to determine whether or not it does, indeed, possess desirable convergence properties, and whether it can be used to determine the globally optimal f*.

  1. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services, the components that provide decision support and some degree of automated decisions, and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.

  2. RBF neural network based PI pitch controller for a class of 5-MW wind turbines using particle swarm optimization algorithm.

    PubMed

    Poultangari, Iman; Shahnazi, Reza; Sheikhan, Mansour

    2012-09-01

    In order to control the pitch angle of blades in wind turbines, the proportional-integral (PI) controller is commonly employed due to its simplicity and industrial usability. Neural networks and evolutionary algorithms are tools that provide a suitable ground for determining the optimal PI gains. In this paper, a radial basis function (RBF) neural network based PI controller is proposed for collective pitch control (CPC) of a 5-MW wind turbine. In order to provide an optimal dataset to train the RBF neural network, the particle swarm optimization (PSO) evolutionary algorithm is used. The proposed method does not require modeling the complexities, nonlinearities, and uncertainties of the system under control. The simulation results show that the proposed controller has satisfactory performance. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
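The PSO-tunes-PI-gains idea can be sketched on a toy first-order plant (the plant, horizon, and PSO settings below are illustrative, not the 5-MW turbine model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Closed-loop tracking cost of a PI controller on a first-order lag x' = -x + u.
def step_cost(gains, n=200, dt=0.05):
    kp, ki = gains
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(n):
        e = 1.0 - x                 # unit step reference
        integ += e * dt
        u = kp * e + ki * integ     # PI law
        x += dt * (-x + u)          # Euler step of the plant
        cost += e * e * dt          # integrated squared error
    return cost

# Plain global-best particle swarm optimization.
def pso(f, bounds, n_particles=20, iters=60):
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    pos = rng.uniform(lo, hi, (n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest = pos.copy(); pbest_f = np.array([f(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *pos.shape))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        fs = np.array([f(p) for p in pos])
        better = fs < pbest_f
        pbest[better] = pos[better]; pbest_f[better] = fs[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

gains, best = pso(step_cost, [(0.0, 20.0), (0.0, 20.0)])
print(gains, best)
```

In the paper the swarm instead builds the training dataset for the RBF network; this sketch keeps only the gain-search step.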

  3. Predictive optimized adaptive PSS in a single machine infinite bus.

    PubMed

    Milla, Freddy; Duarte-Mermoud, Manuel A

    2016-07-01

    Power System Stabilizer (PSS) devices are responsible for providing a damping torque component to generators for reducing fluctuations in the system caused by small perturbations. A Predictive Optimized Adaptive PSS (POA-PSS) to improve the oscillations in a Single Machine Infinite Bus (SMIB) power system is discussed in this paper. POA-PSS provides the optimal design parameters for the classic PSS using an optimization predictive algorithm, which adapts to changes in the inputs of the system. This approach is part of small signal stability analysis, which uses equations in an incremental form around an operating point. Simulation studies on the SMIB power system illustrate that the proposed POA-PSS approach has better performance than the classical PSS. In addition, the effort in the control action of the POA-PSS is much less than that of other approaches considered for comparison. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  5. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. 
The resulting compact representations for the response quantities allow efficient and accurate calculation of the sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed methods are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.

  6. Thermal/Structural Tailoring of Engine Blades (T/STAEBL). Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.; Clevenger, W. B.

    1994-01-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual describes the T/STAEBL data block structure and system organization. The approximate analysis and optimization modules are detailed, and a validation test case is provided.

  7. Thermal/structural tailoring of engine blades (T/STAEBL). Theoretical manual

    NASA Astrophysics Data System (ADS)

    Brown, K. W.; Clevenger, W. B.

    1994-03-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual describes the T/STAEBL data block structure and system organization. The approximate analysis and optimization modules are detailed, and a validation test case is provided.

  8. Development of Response Surface Models for Rapid Analysis & Multidisciplinary Optimization of Launch Vehicle Design Concepts

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1999-01-01

    Multidisciplinary design optimization (MDO) is an important step in the design and evaluation of launch vehicles, since it has a significant impact on performance and lifecycle cost. The objective in MDO is to search the design space to determine the values of design parameters that optimize the performance characteristics subject to system constraints. The Vehicle Analysis Branch (VAB) at NASA Langley Research Center has computerized analysis tools in many of the disciplines required for the design and analysis of launch vehicles. Vehicle performance characteristics can be determined by the use of these computerized analysis tools. The next step is to optimize the system performance characteristics subject to multidisciplinary constraints. However, most of the complex sizing and performance evaluation codes used for launch vehicle design are stand-alone tools, operated by disciplinary experts. They are, in general, difficult to integrate and use directly for MDO. An alternative has been to utilize response surface methodology (RSM) to obtain polynomial models that approximate the functional relationships between performance characteristics and design variables. These approximation models, called response surface models, are then used to integrate the disciplines using mathematical programming methods for efficient system-level design analysis, MDO, and fast sensitivity simulations. A second-order response surface model has been commonly used in RSM, since in many cases it can provide an adequate approximation, especially if the region of interest is sufficiently limited.
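
    The second-order model mentioned can be fit by ordinary least squares over a quadratic basis (intercept, linear, pure quadratic, and interaction terms). The sketch below is an illustrative NumPy implementation, not the VAB toolchain; it recovers the coefficients of a known quadratic from noise-free samples.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Least-squares fit of a second-order response surface
    y ~ b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj)."""
    n, k = X.shape
    cols = [np.ones(n)]                                    # intercept
    cols += [X[:, i] for i in range(k)]                    # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]               # pure quadratics
    cols += [X[:, i] * X[:, j]                             # interactions
             for i in range(k) for j in range(i + 1, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# recover a known quadratic: y = 1 + 2*x0 - x1 + 0.5*x0^2 + x0*x1
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = 1 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] ** 2 + X[:, 0] * X[:, 1]
beta = fit_quadratic_rsm(X, y)
```

In practice the sample points would come from a designed experiment run through the disciplinary codes, and the fitted surface would stand in for them during optimization.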

  9. Evaluating Thermodynamic Integration Performance of the New Amber Molecular Dynamics Package and Assess Potential Halogen Bonds of Enoyl-ACP Reductase (FabI) Benzimidazole Inhibitors

    PubMed Central

    Su, Pin-Chih; Johnson, Michael E.

    2015-01-01

    Thermodynamic integration (TI) can provide accurate binding free energy insights in a lead optimization program, but its high computational expense has limited its usage. In an effort to develop an efficient and accurate TI protocol for a FabI inhibitor lead optimization program, we carefully compared TI with different Amber molecular dynamics (MD) engines (sander and pmemd), MD simulation lengths, numbers of intermediate states and transformation steps, and Lennard-Jones and Coulomb Softcore potential parameters in the one-step TI, using eleven benzimidazole inhibitors in complex with Francisella tularensis enoyl acyl reductase (FtFabI). To our knowledge, this is the first study to extensively test the new AMBER MD engine, pmemd, on TI and compare the parameters of the Softcore potentials in the one-step TI in a protein-ligand binding system. The best performing model, the one-step pmemd TI using 6 intermediate states and 1 ns MD simulations, provides better agreement with experimental results (RMSD = 0.52 kcal/mol) than the best performing implicit solvent method, QM/MM-GBSA, from our previous study (RMSD = 3.00 kcal/mol), while maintaining similar efficiency. Briefly, we show the optimized TI protocol to be highly accurate and affordable for the FtFabI system. This approach can be implemented in a larger-scale benzimidazole scaffold lead optimization against FtFabI. Lastly, the TI results here also provide structure-activity relationship insights and suggest that the para-halogen in benzimidazole compounds might form a weak halogen bond with FabI, which is a well-known halogen-bond-favoring enzyme. PMID:26666582

  10. Evaluating thermodynamic integration performance of the new amber molecular dynamics package and assess potential halogen bonds of enoyl-ACP reductase (FabI) benzimidazole inhibitors.

    PubMed

    Su, Pin-Chih; Johnson, Michael E

    2016-04-05

    Thermodynamic integration (TI) can provide accurate binding free energy insights in a lead optimization program, but its high computational expense has limited its usage. In an effort to develop an efficient and accurate TI protocol for a FabI inhibitor lead optimization program, we carefully compared TI with different Amber molecular dynamics (MD) engines (sander and pmemd), MD simulation lengths, numbers of intermediate states and transformation steps, and Lennard-Jones and Coulomb Softcore potential parameters in the one-step TI, using eleven benzimidazole inhibitors in complex with Francisella tularensis enoyl acyl reductase (FtFabI). To our knowledge, this is the first study to extensively test the new AMBER MD engine, pmemd, on TI and compare the parameters of the Softcore potentials in the one-step TI in a protein-ligand binding system. The best performing model, the one-step pmemd TI using 6 intermediate states and 1 ns MD simulations, provides better agreement with experimental results (RMSD = 0.52 kcal/mol) than the best performing implicit solvent method, QM/MM-GBSA, from our previous study (RMSD = 3.00 kcal/mol), while maintaining similar efficiency. Briefly, we show the optimized TI protocol to be highly accurate and affordable for the FtFabI system. This approach can be implemented in a larger-scale benzimidazole scaffold lead optimization against FtFabI. Lastly, the TI results here also provide structure-activity relationship insights and suggest that the para-halogen in benzimidazole compounds might form a weak halogen bond with FabI, which is a well-known halogen-bond-favoring enzyme. © 2015 Wiley Periodicals, Inc.
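
    The core numerical step of TI, integrating the ensemble-averaged ∂U/∂λ over the coupling parameter across the intermediate states, can be sketched as below. The gradient values here are a made-up analytic stand-in for numbers that would come from the MD ensembles at each λ window; this is the quadrature step only, not the Amber protocol.

```python
import numpy as np

def ti_free_energy(lambdas, dudl_means):
    """Trapezoidal-rule estimate of dG = integral over [0, 1] of the
    ensemble-averaged dU/dlambda, one value per intermediate state."""
    lam = np.asarray(lambdas, dtype=float)
    g = np.asarray(dudl_means, dtype=float)
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(lam)))

# toy check: if <dU/dlambda> = 2*lambda exactly, dG = 1
lams = np.linspace(0.0, 1.0, 6)   # 6 intermediate states, as in the protocol
dudl = 2.0 * lams                 # stand-in for MD ensemble averages
dG = ti_free_energy(lams, dudl)
```

With real data, the choice of the number of λ windows trades integration error against the cost of additional MD simulations, which is exactly the trade-off the abstract explores.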

  11. Professional Development through Organizational Assessment: Using APPA's Facilities Management Evaluation Program

    ERIC Educational Resources Information Center

    Medlin, E. Lander; Judd, R. Holly

    2013-01-01

    APPA's Facilities Management Evaluation Program (FMEP) provides an integrated system to optimize organizational performance. The criteria for evaluation not only provide a tool for organizational continuous improvement, they serve as a compelling leadership development tool essential for today's facilities management professional. The senior…

  12. Chasing a Comet with a Solar Sail

    NASA Technical Reports Server (NTRS)

    Stough, Robert W.; Heaton, Andrew F.; Whorton, Mark S.

    2008-01-01

    Solar sail propulsion systems enable a wide range of missions that require constant thrust or high delta-V over long mission times. One particularly challenging mission type is a comet rendezvous mission. This paper presents optimal low-thrust trajectory designs for a range of sailcraft performance metrics and mission transit times that enable a comet rendezvous mission. These optimal trajectory results provide a trade space which can be parameterized in terms of mission duration and sailcraft performance parameters such that a design space for a small satellite comet chaser mission is identified. These results show that a feasible space exists for a small satellite to perform a comet chaser mission in a reasonable mission time.

  13. Optimal design of high-speed loading spindle based on ABAQUS

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Dong, Yu; Ge, Qingkuan; Yang, Hai

    2017-12-01

    The three-dimensional model of the high-speed loading spindle is established using ABAQUS's modeling module, and a finite element analysis model of the spindle is built using spring elements to simulate the bearing boundary conditions. The static and dynamic performance of the spindle structure with different rectangular-spline specifications and axle neck diameters is studied in depth, as is the influence of different spindle spans on the static and dynamic performance of the high-speed loading spindle. Finally, the optimal structure of the high-speed loading spindle is obtained. The results provide a theoretical basis for improving the overall performance of the test bed.

  14. Modeling human decision making behavior in supervisory control

    NASA Technical Reports Server (NTRS)

    Tulga, M. K.; Sheridan, T. B.

    1977-01-01

    An optimal decision control model was developed, which is based primarily on a dynamic programming algorithm which looks at all the available task possibilities, charts an optimal trajectory, and commits itself to do the first step (i.e., follow the optimal trajectory during the next time period), and then iterates the calculation. A Bayesian estimator was included which estimates the tasks which might occur in the immediate future and provides this information to the dynamic programming routine. Preliminary trials comparing the human subject's performance to that of the optimal model show a great similarity, but indicate that the human skips certain movements which require quick change in strategy.
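
    A miniature version of the commit-first-step idea can be sketched with a small dynamic program over hypothetical tasks, each given as (duration, deadline, reward). The task model is invented for illustration and omits the Bayesian arrival estimator; a real supervisory loop would re-plan after each committed action as new tasks arrive.

```python
def best_schedule(tasks, horizon):
    """Bitmask dynamic program: choose an ordering of tasks, each given as
    (duration, deadline, reward), that maximizes total reward within the
    horizon. Returns (total_reward, task_index_sequence)."""
    n = len(tasks)
    memo = {}

    def solve(mask, t):
        if (mask, t) in memo:
            return memo[(mask, t)]
        best = (0.0, [])
        for i in range(n):
            if mask & (1 << i):
                continue  # task i already scheduled
            dur, deadline, reward = tasks[i]
            end = t + dur
            if end <= min(deadline, horizon):
                sub_reward, sub_seq = solve(mask | (1 << i), end)
                if reward + sub_reward > best[0]:
                    best = (reward + sub_reward, [i] + sub_seq)
        memo[(mask, t)] = best
        return best

    return solve(0, 0)

def supervisory_step(tasks, horizon):
    """Commit only the first task of the optimal trajectory, then (in a real
    loop) re-plan: the receding-horizon idea described above."""
    _, seq = best_schedule(tasks, horizon)
    return seq[0] if seq else None

tasks = [(2, 2, 5.0),   # (duration, deadline, reward) -- invented example
         (1, 3, 3.0),
         (2, 4, 4.0)]
first_action = supervisory_step(tasks, horizon=10)
```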

  15. Intercom Melodramas Deliver a Reading Message in an Entertaining Way!

    ERIC Educational Resources Information Center

    Brazeau, Martin H.

    1995-01-01

    Provides a guide for helping students perform melodramas via the intercom. The benefits include motivating students to read and heightening student self-esteem. Suggests five elements for a successful performance: a clearly written script, rehearsals, optimal voice levels and microphone distance, an introduction by the principal, and…

  16. Neighboring Optimal Aircraft Guidance in a General Wind Environment

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R. (Inventor)

    2003-01-01

    Method and system for determining an optimal route for an aircraft moving between first and second waypoints in a general wind environment. A selected first wind environment is analyzed for which a nominal solution can be determined. A second wind environment is then incorporated; and a neighboring optimal control (NOC) analysis is performed to estimate an optimal route for the second wind environment. In particular examples with flight distances of 2500 and 6000 nautical miles in the presence of constant or piecewise linearly varying winds, the difference in flight time between a nominal solution and an optimal solution is 3.4 to 5 percent. Constant or variable winds and aircraft speeds can be used. Updated second wind environment information can be provided and used to obtain an updated optimal route.

  17. Human performance on visually presented Traveling Salesman problems.

    PubMed

    Vickers, D; Butavicius, M; Lee, M; Medvedev, A

    2001-01-01

    Little research has been carried out on human performance in optimization problems, such as the Traveling Salesman problem (TSP). Studies by Polivanova (1974, Voprosy Psikhologii, 4, 41-51) and by MacGregor and Ormerod (1996, Perception & Psychophysics, 58, 527-539) suggest that: (1) the complexity of solutions to visually presented TSPs depends on the number of points on the convex hull; and (2) the perception of optimal structure is an innate tendency of the visual system, not subject to individual differences. Results are reported from two experiments. In the first, measures of the total length and completion speed of pathways, and a measure of path uncertainty were compared with optimal solutions produced by an elastic net algorithm and by several heuristic methods. Performance was also compared under instructions to draw the shortest or the most attractive pathway. In the second, various measures of performance were compared with scores on Raven's advanced progressive matrices (APM). The number of points on the convex hull did not determine the relative optimality of solutions, although both this factor and the total number of points influenced solution speed and path uncertainty. Subjects' solutions showed appreciable individual differences, which had a strong correlation with APM scores. The relation between perceptual organization and the process of solving visually presented TSPs is briefly discussed, as is the potential of optimization for providing a conceptual framework for the study of intelligence.
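
    For concreteness, one of the simple construction heuristics that human tours are often compared against can be sketched as follows. This is a generic nearest-neighbor implementation on an invented point set; the elastic net algorithm used in the study is considerably more involved.

```python
import math

def tour_length(points, order):
    """Length of the closed tour visiting 2-D points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(points, start=0):
    """Greedy nearest-neighbor construction: repeatedly hop to the closest
    unvisited point, then close the tour."""
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]   # a unit square; optimal tour length is 4
order = nearest_neighbor(pts)
length = tour_length(pts, order)
```

Comparing the length of a human-drawn tour against such heuristic and near-optimal baselines is how relative optimality measures of the kind reported here are computed.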

  18. Optimizing RF gun cavity geometry within an automated injector design system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofler, Alicia; Evtushenko, Pavel

    2011-03-28

    RF guns play an integral role in the success of several light sources around the world, and properly designed and optimized cw superconducting RF (SRF) guns can provide a path to higher average brightness. As the need for these guns grows, it is important to have automated optimization software tools that vary the geometry of the gun cavity as part of the injector design process. This will allow designers to improve existing designs for present installations, extend the utility of these guns to other applications, and develop new designs. An evolutionary algorithm (EA) based system can provide this capability because EAs can search in parallel a large parameter space (often non-linear) and in a relatively short time identify promising regions of the space for more careful consideration. The injector designer can then evaluate more cavity design parameters during the injector optimization process against the beam performance requirements of the injector. This paper will describe an extension to the APISA software that allows the cavity geometry to be modified as part of the injector optimization and provide examples of its application to existing RF and SRF gun designs.
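
    The EA machinery described, a population searching a possibly non-linear parameter space in parallel, can be caricatured in a few lines. This is a generic real-coded EA on a made-up two-parameter "geometry" objective, not the APISA software; the operators (truncation selection, blend crossover, Gaussian mutation) are common textbook choices.

```python
import random

def evolve(fitness, bounds, pop_size=30, gens=60, seed=1):
    """Minimal real-coded EA: keep the better half of the population each
    generation, breed children by blending two elite parents, and add
    Gaussian mutation, clipped to the parameter bounds."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, i):
        lo, hi = bounds[i]
        return min(max(x, lo), hi)

    pop = [[rng.uniform(*bounds[i]) for i in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]   # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            children.append([clip(0.5 * (a[i] + b[i]) + rng.gauss(0, 0.1), i)
                             for i in range(dim)])
        pop = elite + children
    return min(pop, key=fitness)

# toy "cavity geometry" objective with a known optimum at (0.3, 0.7)
best = evolve(lambda g: (g[0] - 0.3) ** 2 + (g[1] - 0.7) ** 2,
              bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the setting of the abstract, each fitness evaluation would be a full cavity-plus-injector simulation, which is why identifying promising regions cheaply matters.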

  19. StagBL : A Scalable, Portable, High-Performance Discretization and Solver Layer for Geodynamic Simulation

    NASA Astrophysics Data System (ADS)

    Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.

    2017-12-01

    StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, Manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo.
Central to StagBL is the notion of an uninterrupted pipeline from toy/teaching codes to high-performance, extreme-scale solves. StagBLDemo replicates the functionality of an advanced MATLAB-style regional geodynamics code, thus providing users with a concrete procedure to exceed the performance and scalability limitations of smaller-scale tools.

  20. Optimal Diabatic Dynamics of Majorana-based Topological Qubits

    NASA Astrophysics Data System (ADS)

    Seradjeh, Babak; Rahmani, Armin; Franz, Marcel

    In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles, such as Majorana zero modes, and are protected from local environmental perturbations. This scheme requires slow operations. By using Pontryagin's maximum principle, here we show that the same quantum gates can be implemented in much shorter times through optimal diabatic pulses. While our fast diabatic gates do not enjoy topological protection, they provide significant practical advantages due to their optimal speed and remarkable robustness to calibration errors and noise. NSERC, CIfAR, NSF DMR-1350663, BSF 2014345.

  1. Constrained optimization of sequentially generated entangled multiqubit states

    NASA Astrophysics Data System (ADS)

    Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique

    2009-08-01

    We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.

  2. Absolute Stability Analysis of a Phase Plane Controlled Spacecraft

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Plummer, Michael; Bedrossian, Nazareth; Hall, Charles; Jackson, Mark; Spanos, Pol

    2010-01-01

    Many aerospace attitude control systems utilize phase plane control schemes that include nonlinear elements such as dead zone and ideal relay. To evaluate phase plane control robustness, stability margin prediction methods must be developed. Absolute stability is extended to predict stability margins and to define an abort condition. A constrained optimization approach is also used to design flex filters for roll control. The design goal is to optimize vehicle tracking performance while maintaining adequate stability margins. Absolute stability is shown to provide satisfactory stability constraints for the optimization.

  3. GeMS: an advanced software package for designing synthetic genes.

    PubMed

    Jayaraj, Sebastian; Reid, Ralph; Santi, Daniel V

    2005-01-01

    A user-friendly, advanced software package for gene design is described. The software comprises an integrated suite of programs-also provided as stand-alone tools-that automatically performs the following tasks in gene design: restriction site prediction, codon optimization for any expression host, restriction site inclusion and exclusion, separation of long sequences into synthesizable fragments, T(m) and stem-loop determinations, optimal oligonucleotide component design and design verification/error-checking. The output is a complete design report and a list of optimized oligonucleotides to be prepared for subsequent gene synthesis. The user interface accommodates both inexperienced and experienced users. For inexperienced users, explanatory notes are provided such that detailed instructions are not necessary; for experienced users, a streamlined interface is provided without such notes. The software has been extensively tested in the design and successful synthesis of over 400 kb of genes, many of which exceeded 5 kb in length.
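
    One of the listed tasks, codon optimization, can be sketched with a "pick the most frequent codon" rule. The usage table below is a hypothetical fragment (a real tool loads a complete codon-usage table for the chosen expression host), and this is not the GeMS algorithm itself, which balances many additional constraints such as restriction sites and secondary structure.

```python
# Hypothetical codon-usage fragment for an example host; a real tool would
# load a complete table for the chosen expression organism.
CODON_TABLE = {
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "F": {"TTT": 0.58, "TTC": 0.42},
    "*": {"TAA": 0.61, "TGA": 0.30, "TAG": 0.09},
}

def optimize_codons(protein):
    """'Most-frequent codon' strategy: for each residue, emit the codon with
    the highest usage frequency in the host table."""
    return "".join(max(CODON_TABLE[aa], key=CODON_TABLE[aa].get)
                   for aa in protein)

dna = optimize_codons("MKF*")   # -> "ATGAAATTTTAA"
```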

  4. No-go theorem for iterations of unknown quantum gates

    NASA Astrophysics Data System (ADS)

    Soleimanifar, Mehdi; Karimipour, Vahid

    2016-01-01

    We propose a no-go theorem by proving the impossibility of constructing a deterministic quantum circuit that iterates a unitary oracle by calling it only once. Different schemes are provided to bypass this result and to approximately realize the iteration. The optimal scheme is also studied. An interesting observation is that for a large number of iterations, a trivial strategy like using the identity channel has the optimal performance, and preprocessing, postprocessing, or using resources like entanglement does not help at all. Intriguingly, the number of iterations, when being large enough, does not affect the performance of the proposed schemes.

  5. A Simulation Modeling Approach Method Focused on the Refrigerated Warehouses Using Design of Experiment

    NASA Astrophysics Data System (ADS)

    Cho, G. S.

    2017-09-01

    For performance optimization of Refrigerated Warehouses, design parameters are selected from physical parameters, such as the number of equipment units and aisles and the forklift speeds, for ease of modification. This paper provides a comprehensive framework for the system design of Refrigerated Warehouses. We propose a modeling approach which aims at simulation optimization so as to meet required design specifications, using the Design of Experiment (DOE), and analyze the simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of Refrigerated Warehouse operations.
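
    The DOE step can be illustrated with a full-factorial design over warehouse parameters; the factor names and levels below are invented for illustration, and each resulting run would be fed to the simulation model.

```python
from itertools import product

def full_factorial(levels):
    """Full-factorial design: one simulation run per combination of factor
    levels, returned as a list of {factor: level} dictionaries."""
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]

runs = full_factorial({"forklifts": [2, 4],
                       "aisles": [6, 8, 10],
                       "forklift_speed": [1.5, 2.0]})   # 2 * 3 * 2 = 12 runs
```

Fractional designs would cut this run count when the number of factors grows, which is the usual motivation for DOE over brute-force sweeps.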

  6. Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO

    NASA Technical Reports Server (NTRS)

    Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.

    2016-01-01

    A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
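
    The validation step mentioned, checking analytic derivatives against finite-difference approximations, looks like this in miniature. The scalar function here is made up and stands in for a thermodynamic output; the pattern of comparing a hand-derived (or adjoint-computed) derivative against a central difference is the general technique.

```python
import math

def f(x):
    # made-up smooth test function standing in for a thermodynamic output
    return math.exp(x) * math.sin(x)

def dfdx_analytic(x):
    # hand-derived analytic derivative (what an adjoint would supply)
    return math.exp(x) * (math.sin(x) + math.cos(x))

def dfdx_fd(x, h=1e-6):
    # central finite difference, the reference used for validation
    return (f(x + h) - f(x - h)) / (2 * h)

err = abs(dfdx_analytic(0.7) - dfdx_fd(0.7))
```

The finite-difference check costs two extra function evaluations per input, which is exactly why analytic or adjoint derivatives win once the number of design variables grows.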

  7. Optimal firing rate estimation

    NASA Technical Reports Server (NTRS)

    Paulin, M. G.; Hoffman, L. F.

    2001-01-01

    We define a measure for evaluating the quality of a predictive model of the behavior of a spiking neuron. This measure, information gain per spike (Is), indicates how much more information is provided by the model than if the prediction were made by specifying the neuron's average firing rate over the same time period. We apply a maximum Is criterion to optimize the performance of Gaussian smoothing filters for estimating neural firing rates. With data from bullfrog vestibular semicircular canal neurons and data from simulated integrate-and-fire neurons, the optimal bandwidth for firing rate estimation is typically similar to the average firing rate. Precise timing and average rate models are limiting cases that perform poorly. We estimate that bullfrog semicircular canal sensory neurons transmit on the order of 1 bit of stimulus-related information per spike.
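
    A Gaussian smoothing estimate of the kind being optimized can be sketched as below. The spike train is synthetic, and the choice of a bandwidth comparable to the mean interspike interval loosely mirrors the paper's finding that the optimal bandwidth sits between the precise-timing and average-rate extremes.

```python
import numpy as np

def firing_rate(spike_times, t_grid, bandwidth):
    """Gaussian-filter rate estimate: each spike contributes a normalized
    Gaussian bump of width `bandwidth` (seconds); summing the bumps gives
    an instantaneous rate in spikes/s."""
    t = np.asarray(t_grid)[:, None]
    s = np.asarray(spike_times)[None, :]
    k = np.exp(-0.5 * ((t - s) / bandwidth) ** 2)
    return k.sum(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

spikes = np.arange(0.0, 1.0, 0.05)     # synthetic, perfectly regular 20 Hz train
t = np.linspace(0.2, 0.8, 121)         # interior of the record, away from edges
rate = firing_rate(spikes, t, bandwidth=0.05)
```

For this regular train the estimate is flat at 20 spikes/s; a much narrower bandwidth would resolve individual spikes (precise timing), a much wider one would blur everything toward the mean rate.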

  8. A computational fluid dynamics simulation framework for ventricular catheter design optimization.

    PubMed

    Weisenberg, Sofy H; TerMaath, Stephanie C; Barbier, Charlotte N; Hill, Judith C; Killeffer, James A

    2017-11-10

    OBJECTIVE Cerebrospinal fluid (CSF) shunts are the primary treatment for patients suffering from hydrocephalus. While proven effective in symptom relief, these shunt systems are plagued by high failure rates and often require repeated revision surgeries to replace malfunctioning components. One of the leading causes of CSF shunt failure is obstruction of the ventricular catheter by aggregations of cells, proteins, blood clots, or fronds of choroid plexus that occlude the catheter's small inlet holes or even the full internal catheter lumen. Such obstructions can disrupt CSF diversion out of the ventricular system or impede it entirely. Previous studies have suggested that altering the catheter's fluid dynamics may help to reduce the likelihood of complete ventricular catheter failure caused by obstruction. However, systematic correlation between a ventricular catheter's design parameters and its performance, specifically its likelihood to become occluded, still remains unknown. Therefore, an automated, open-source computational fluid dynamics (CFD) simulation framework was developed for use in the medical community to determine optimized ventricular catheter designs and to rapidly explore parameter influence for a given flow objective. METHODS The computational framework was developed by coupling a 3D CFD solver and an iterative optimization algorithm and was implemented in a high-performance computing environment. The capabilities of the framework were demonstrated by computing an optimized ventricular catheter design that provides uniform flow rates through the catheter's inlet holes, a common design objective in the literature. The baseline computational model was validated using 3D nuclear imaging to provide flow velocities at the inlet holes and through the catheter. 
RESULTS The optimized catheter design achieved through use of the automated simulation framework improved significantly on previous attempts to reach a uniform inlet flow rate distribution using the standard catheter hole configuration as a baseline. While the standard ventricular catheter design featuring uniform inlet hole diameters and hole spacing has a standard deviation of 14.27% for the inlet flow rates, the optimized design has a standard deviation of 0.30%. CONCLUSIONS This customizable framework, paired with high-performance computing, provides a rapid method of design testing to solve complex flow problems. While a relatively simplified ventricular catheter model was used to demonstrate the framework, the computational approach is applicable to any baseline catheter model, and it is easily adapted to optimize catheters for the unique needs of different patients as well as for other fluid-based medical devices.

  9. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property in kernel methods for the Support Vector Machine (SVM), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix on indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance compared to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, we can find a suggested optimal λ for the projection method using unconstrained optimization in kernel learning. In this paper we focus on optimal λ determination, in the pursuit of a precise optimal-λ determination method in an unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships. We compared optimal λ determination with the Logdet divergence and the perturbed von-Neumann divergence, aiming at finding a better λ for the projection method. Results on a number of real-world data sets show that the projection method with optimal λ by Logdet divergence demonstrates near-optimal performance, and the perturbed von-Neumann divergence can help determine a relatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way in kernel SVMs for varied objectives.
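
    The spectrum-repair baselines that the projection method generalizes (denoising/clipping and flipping) can be sketched directly from the eigendecomposition. The similarity matrix below is a small invented indefinite example; this sketch is the baseline machinery, not the λ-parameterized projection itself.

```python
import numpy as np

def make_psd(K, method="clip"):
    """Repair an indefinite similarity matrix through its spectrum:
    'clip' zeroes negative eigenvalues (the denoising method),
    'flip' takes their absolute values (the flipping method)."""
    w, V = np.linalg.eigh((K + K.T) / 2)   # symmetrize, then eigendecompose
    w = np.maximum(w, 0.0) if method == "clip" else np.abs(w)
    return (V * w) @ V.T                   # reassemble a PSD kernel matrix

K = np.array([[ 1.0, 0.9, -0.8],
              [ 0.9, 1.0,  0.4],
              [-0.8, 0.4,  1.0]])          # invented indefinite similarity matrix
K_psd = make_psd(K, "clip")
```

The repaired matrix can be handed to any standard SVM solver, since convexity of the quadratic program is restored.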

  10. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure has been developed which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes). The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective-function problem into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
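
    The K-S aggregation at the heart of this formulation replaces the non-smooth max over objectives/constraints with a smooth, differentiable envelope. A minimal sketch, written in the common shifted form for numerical stability (the weighting enhancements described in the abstract are not included):

```python
import numpy as np

def ks_aggregate(f, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth upper bound on max(f)
    that tightens as rho grows, evaluated in the shifted, numerically
    stable form KS = f_max + (1/rho) * log(sum(exp(rho * (f_i - f_max))))."""
    f = np.asarray(f, dtype=float)
    fmax = f.max()
    return fmax + np.log(np.sum(np.exp(rho * (f - fmax)))) / rho

vals = [0.2, 0.9, 0.5]        # e.g. several normalized objective values
ks = ks_aggregate(vals)
```

Because the envelope is differentiable, the aggregated problem can be handed directly to a gradient-based solver such as BFGS, which is exactly the role it plays in the procedure above.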

  11. The Effect of Aerodynamic Evaluators on the Multi-Objective Optimization of Flatback Airfoils

    NASA Astrophysics Data System (ADS)

    Miller, M.; Slew, K. Lee; Matida, E.

    2016-09-01

    With the long lengths of today's wind turbine rotor blades, there is a need to reduce the mass, thereby requiring stiffer airfoils, while maintaining the aerodynamic efficiency of the airfoils, particularly in the inboard region of the blade where structural demands are highest. Using a genetic algorithm, the multi-objective aero-structural optimization of 30% thick flatback airfoils was systematically performed for a variety of aerodynamic evaluators such as lift-to-drag ratio (Cl/Cd), torque (Ct), and torque-to-thrust ratio (Ct/Cn) to determine their influence on airfoil shape and performance. The airfoil optimized for Ct possessed a 4.8% thick trailing edge and a rather blunt leading-edge region which creates high levels of lift and, correspondingly, drag. Its ability to maintain similar levels of lift and drag under forced transition conditions proved its insensitivity to roughness. The airfoil optimized for Cl/Cd displayed relatively poor insensitivity to roughness due to the rather aft-located free transition points. The Ct/Cn optimized airfoil was found to have a very similar shape to that of the Cl/Cd airfoil, with a slightly more blunt leading edge which aided in providing higher levels of lift and moderate insensitivity to roughness. The influence of the chosen aerodynamic evaluator under the specified conditions and constraints in the optimization of wind turbine airfoils is shown to have a direct impact on the airfoil shape and performance.

  12. Design and Optimization of Composite Gyroscope Momentum Wheel Rings

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2007-01-01

    Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.

  13. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model performs ergonomic optimization by analyzing the ergonomic conditions of manual work. The model includes a schematic and systematic method for analyzing the operations and identifies all the ergonomic aspects to be evaluated. The approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  14. Population-based metaheuristic optimization in neutron optics and shielding design

    NASA Astrophysics Data System (ADS)

    DiJulio, D. D.; Björgvinsdóttir, H.; Zendler, C.; Bentley, P. M.

    2016-11-01

    Population-based metaheuristic algorithms are powerful tools in the design of neutron scattering instruments, and their use for this purpose is becoming increasingly commonplace. Today a wide range of algorithms is available when designing an instrument, and it is not always initially clear which will provide the best performance. Furthermore, due to the nature of these algorithms, the final solution found for a specific design scenario cannot always be guaranteed to be the global optimum. Therefore, to explore the potential benefits of and differences between the available algorithms when applied to such design scenarios, we have carried out a detailed study of several commonly used algorithms. For this purpose, we have developed a new general optimization software package which combines a number of common metaheuristic algorithms within a single user interface and is designed specifically with neutronic calculations in mind. The algorithms included in the software are implementations of Particle-Swarm Optimization (PSO), Differential Evolution (DE), Artificial Bee Colony (ABC), and a Genetic Algorithm (GA). The software has been used to optimize the design of several problems in neutron optics and shielding, coupled with Monte-Carlo simulations, in order to evaluate the performance of the various algorithms. Generally, the performance of the algorithms depended on the specific scenario; however, DE provided the best average solutions in all scenarios investigated in this work.
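    As an illustration of the DE scheme singled out above, the following minimal DE/rand/1/bin implementation minimizes a toy sphere function standing in for a Monte-Carlo-derived figure of merit; the population size, differential weight, and crossover settings are illustrative defaults, not the study's.

```python
# Hedged sketch of DE/rand/1/bin on a toy objective.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # stand-in objective; a neutronics figure of merit would go here
    return float(np.sum(x ** 2))

def differential_evolution(f, bounds, pop_size=20, f_weight=0.7, cr=0.9, gens=200):
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: perturb a random base vector by a scaled difference
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + f_weight * (b - c), lo, hi)
            # binomial crossover, guaranteeing at least one mutant gene
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # greedy selection
            if (tc := f(trial)) < cost[i]:
                pop[i], cost[i] = trial, tc
    best = int(np.argmin(cost))
    return pop[best], cost[best]

x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 4)
print(x_best, f_best)
```

In a real instrument-design loop the objective evaluation would be a Monte-Carlo simulation, which is why population methods that tolerate noisy, gradient-free objectives are attractive here.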

  15. Magic in the machine: a computational magician's assistant

    PubMed Central

    Williams, Howard; McOwan, Peter W.

    2014-01-01

    A human magician blends science, psychology, and performance to create a magical effect. In this paper we explore what can be achieved when that human intelligence is replaced or assisted by machine intelligence. Magical effects are all in some form based on hidden mathematical, scientific, or psychological principles; often the parameters controlling these underpinning techniques are hard for a magician to blend to maximize the magical effect required. The complexity is often caused by interacting and often conflicting physical and psychological constraints that need to be optimally balanced. Normally this tuning is done by trial and error, combined with human intuitions. Here we focus on applying Artificial Intelligence methods to the creation and optimization of magic tricks exploiting mathematical principles. We use experimentally derived data about particular perceptual and cognitive features, combined with a model of the underlying mathematical process to provide a psychologically valid metric to allow optimization of magical impact. In the paper we introduce our optimization methodology and describe how it can be flexibly applied to a range of different types of mathematics based tricks. We also provide two case studies as exemplars of the methodology at work: a magical jigsaw, and a mind reading card trick effect. We evaluate each trick created through testing in laboratory and public performances, and further demonstrate the real world efficacy of our approach for professional performers through sales of the tricks in a reputable magic shop in London. PMID:25452736

  16. The optimal design of service level agreement in IAAS based on BDIM

    NASA Astrophysics Data System (ADS)

    Liu, Xiaochen; Zhan, Zhiqiang

    2013-03-01

    Cloud Computing has become increasingly prevalent over the past few years, highlighting the importance of Infrastructure-as-a-Service (IaaS). This kind of service enables scaling of bandwidth, memory, computing power, and storage. However, SLAs in IaaS also face complexity and variety, and users weigh the business value of the service. To meet most users' requirements, a methodology for designing optimal SLAs in IaaS from a business perspective is proposed. This method differs from conventional SLA design in that it considers not only the service provider's perspective but also the customer's. The methodology better captures the linkage between service provider and service client by minimizing the business loss originating from performance degradation and IT infrastructure failures while maximizing profits for both provider and clients. An optimal design for an IaaS model is provided, and an example is analyzed to show that this approach obtains higher profit.

  17. Global Design Optimization for Aerodynamics and Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)

    2000-01-01

    Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. 
Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.

  18. SPECT System Optimization Against A Discrete Parameter Space

    PubMed Central

    Meng, L. J.; Li, N.

    2013-01-01

    In this paper, we present an analytical approach for optimizing the design of a static SPECT system, or the sampling strategy of variable/adaptive SPECT imaging hardware, against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced the artificial concept of a virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to optimum imaging performance. Although we are using SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
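    The ITD idea can be sketched as a toy time-allocation problem. The per-detector information rates and the concave log surrogate objective below are invented for illustration, and SLSQP stands in for the paper's block-iterative solver.

```python
# Hedged sketch: allocate a fixed total imaging time T across "virtual
# detectors" to maximize a concave surrogate information measure.
import numpy as np
from scipy.optimize import minimize

s = np.array([5.0, 2.0, 1.0, 0.2])   # hypothetical detector information rates
T = 10.0                              # total imaging time budget

def neg_info(t):
    # concave surrogate (negated for minimization); unique optimum
    return -np.sum(np.log1p(s * t))

res = minimize(
    neg_info,
    x0=np.full(len(s), T / len(s)),            # start from uniform ITD
    method="SLSQP",
    bounds=[(0.0, T)] * len(s),                # nonnegative times
    constraints=[{"type": "eq", "fun": lambda t: t.sum() - T}],
)
itd = res.x
print(itd)
```

The optimum concentrates imaging time on the more effective detectors and drives the least effective one toward zero, which mirrors how the optimum ITD is read as a measure of relative detector importance.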

  19. Design optimization of single mixed refrigerant LNG process using a hybrid modified coordinate descent algorithm

    NASA Astrophysics Data System (ADS)

    Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong

    2018-01-01

    Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibility, which deteriorates the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm is proposed for the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed algorithm provided an improved result compared to existing methodologies in finding the optimal condition of the complex mixed refrigerant natural gas liquefaction process. By applying it, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be applied to any mixed-refrigerant-based liquefaction process in the natural gas industry.
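    The coordinate-descent core of such an approach can be sketched as follows. The smooth test objective below is a stand-in for the Aspen Hysys specific-compression-power model, and the "hybrid modified" refinements of the HMCD algorithm are not reproduced; this is plain cyclic coordinate descent.

```python
# Hedged sketch: optimize one decision variable at a time (e.g. a
# refrigerant flow rate) while holding the others fixed, sweeping cyclically.
from scipy.optimize import minimize_scalar

def compression_power(x):
    # illustrative convex objective standing in for the process model
    x1, x2, x3 = x
    return (x1 - 1) ** 2 + 2 * (x2 - 3) ** 2 + (x3 + 2) ** 2 + 0.5 * x1 * x2

def coordinate_descent(f, x0, sweeps=50):
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            # 1-D line search along coordinate i with all others frozen
            res = minimize_scalar(lambda t: f(x[:i] + [t] + x[i + 1:]))
            x[i] = res.x
    return x

x_opt = coordinate_descent(compression_power, [0.0, 0.0, 0.0])
print(x_opt)
```

Each inner step is a cheap one-dimensional search, which is what makes the scheme attractive when every objective evaluation is an expensive process-simulator call.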

  20. Optimal orientation in flows: providing a benchmark for animal movement strategies.

    PubMed

    McLaren, James D; Shamoun-Baranes, Judy; Dokter, Adriaan M; Klaassen, Raymond H G; Bouten, Willem

    2014-10-06

    Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity.
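    A toy simulation illustrates the gap between two of the generic strategies in a steady uniform crosswind, where the time-optimal benchmark reduces to the drift-cancelling fixed heading; all speeds and distances below are invented for illustration.

```python
# Hedged sketch: "vector orientation" (single fixed heading chosen at
# departure to offset drift) vs. "goal orientation" (always aiming at the
# goal) in a steady uniform crosswind.
import numpy as np

AIRSPEED = 10.0                      # animal self-propelled speed
WIND = np.array([0.0, 4.0])          # uniform crosswind
GOAL = np.array([1000.0, 0.0])
DT = 0.1

def fly(heading_fn):
    pos, t = np.zeros(2), 0.0
    while np.linalg.norm(GOAL - pos) > AIRSPEED * DT:
        h = heading_fn(pos)                       # unit heading vector
        pos = pos + (AIRSPEED * h + WIND) * DT    # ground velocity = air + flow
        t += DT
    return t

# vector orientation: fixed heading whose crosswind component cancels WIND
theta = np.arcsin(WIND[1] / AIRSPEED)
fixed = np.array([np.cos(theta), -np.sin(theta)])
t_vector = fly(lambda pos: fixed)

# goal orientation: continually re-aim straight at the goal
t_goal = fly(lambda pos: (GOAL - pos) / np.linalg.norm(GOAL - pos))

print(t_vector, t_goal)
```

In this steady, predictable flow the single pre-set heading is time-optimal and the pursuit-style goal orientation pays a detour penalty, matching the abstract's finding that vector orientation adjusted on departure is nearly optimal in strong predictable flow.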

  1. Optimal orientation in flows: providing a benchmark for animal movement strategies

    PubMed Central

    McLaren, James D.; Shamoun-Baranes, Judy; Dokter, Adriaan M.; Klaassen, Raymond H. G.; Bouten, Willem

    2014-01-01

    Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity. PMID:25056213

  2. Exhaust emission reduction for intermittent combustion aircraft engines

    NASA Technical Reports Server (NTRS)

    Moffett, R. N.

    1979-01-01

    Three concepts for optimizing the performance, increasing the fuel economy, and reducing exhaust emission of the piston aircraft engine were investigated. High energy-multiple spark discharge and spark plug tip penetration, ultrasonic fuel vaporization, and variable valve timing were evaluated individually. Ultrasonic fuel vaporization did not demonstrate sufficient improvement in distribution to offset the performance loss caused by the additional manifold restriction. High energy ignition and revised spark plug tip location provided no change in performance or emissions. Variable valve timing provided some performance benefit; however, even greater performance improvement was obtained through induction system tuning which could be accomplished with far less complexity.

  3. Nutritional Supplements for Strength Power Athletes

    NASA Astrophysics Data System (ADS)

    Wilborn, Colin

    Over the last decade research involving nutritional supplementation and sport performance has increased substantially. Strength and power athletes have specific needs to optimize their performance. Nutritional supplementation cannot be viewed as a replacement for a balanced diet but as an important addition to it. However, diet and supplementation are not mutually exclusive, nor does one depend on the other. Strength and power athletes have four general areas of supplementation needs. First, strength athletes need supplements that have a direct effect on performance. The second group of supplements includes those that promote recovery. The third group comprises the supplements that enhance immune function. The last group of supplements includes those that provide energy or have a direct effect on the workout. This chapter reviews the key supplements needed to optimize the performance and training of the strength athlete.

  4. Robust control of flexible space vehicles with minimum structural excitation: On-off pulse control of flexible space vehicles

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Liu, Qiang

    1992-01-01

    Both feedback and feedforward control approaches for uncertain dynamical systems (in particular, with uncertainty in structural mode frequency) are investigated. The control objective is to achieve a fast settling time (high performance) and robustness (insensitivity) to plant uncertainty. Preshaping of an ideal, time optimal control input using a tapped-delay filter is shown to provide a fast settling time with robust performance. A robust, non-minimum-phase feedback controller is synthesized with particular emphasis on its proper implementation for a non-zero set-point control problem. It is shown that a properly designed, feedback controller performs well, as compared with a time optimal open loop controller with special preshaping for performance robustness. Also included are two separate papers by the same authors on this subject.

  5. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks.

    PubMed

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-04-19

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.

  6. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks

    PubMed Central

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-01-01

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%. PMID:28422062

  7. Harnessing Diversity towards the Reconstructing of Large Scale Gene Regulatory Networks

    PubMed Central

    Yamanaka, Ryota; Kitano, Hiroaki

    2013-01-01

    Elucidating gene regulatory networks (GRNs) from large-scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus-driven approaches combining different algorithms, have emerged as a potentially promising strategy for inferring accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet, that can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy of combining many algorithms does not always lead to performance improvement relative to the cost of consensus and (ii) TopkNet, integrating only high-performance algorithms, provides significant performance improvement compared to the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is key to reconstructing an unknown regulatory network. Similarity among gene-expression datasets can be used to determine potentially optimal algorithms for reconstruction of unknown regulatory networks; i.e., if the expression data associated with a known regulatory network are similar to those associated with an unknown regulatory network, the optimal algorithms determined for the known network can be repurposed to infer the unknown one. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study, provides a powerful strategy towards harnessing the wisdom of the crowds in reconstruction of unknown regulatory networks. PMID:24278007

  8. Does unbelted safety requirement affect protection for belted occupants?

    PubMed

    Hu, Jingwen; Klinich, Kathleen D; Manary, Miriam A; Flannagan, Carol A C; Narayanaswamy, Prabha; Reed, Matthew P; Andreen, Margaret; Neal, Mark; Lin, Chin-Hsu

    2017-05-29

    Federal regulations in the United States require vehicles to meet occupant performance requirements with unbelted test dummies. Removing the test requirements with unbelted occupants might encourage the deployment of seat belt interlocks and allow restraint optimization to focus on belted occupants. The objective of this study is to compare the performance of restraint systems optimized for belted-only occupants with those optimized for both belted and unbelted occupants using computer simulations and field crash data analyses. In this study, 2 validated finite element (FE) vehicle/occupant models (a midsize sedan and a midsize SUV) were selected. Restraint design optimizations under standardized crash conditions (U.S.-NCAP and FMVSS 208) with and without unbelted requirements were conducted using Hybrid III (HIII) small female and midsize male anthropomorphic test devices (ATDs) in both vehicles on both driver and right front passenger positions. A total of 10 to 12 design parameters were varied in each optimization using a combination of response surface method (RSM) and genetic algorithm. To evaluate the field performance of restraints optimized with and without unbelted requirements, 55 frontal crash conditions covering a greater variety of crash types than those in the standardized crashes were selected. A total of 1,760 FE simulations were conducted for the field performance evaluation. Frontal crashes in the NASS-CDS database from 2002 to 2012 were used to develop injury risk curves and to provide the baseline performance of current restraint system and estimate the injury risk change by removing the unbelted requirement. Unbelted requirements do not affect the optimal seat belt and airbag design parameters in 3 out of 4 vehicle/occupant position conditions, except for the SUV passenger side. 
Overall, compared to the optimal designs with unbelted requirements, optimal designs without unbelted requirements generated the same or lower total injury risks for belted occupants depending on statistical methods used for the analysis, but they could also increase the total injury risks for unbelted occupants. This study demonstrated potential for reducing injury risks to belted occupants if the unbelted requirements are eliminated. Further investigations are necessary to confirm these findings.

  9. A novel channel selection method for optimal classification in different motor imagery BCI paradigms.

    PubMed

    Shan, Haijun; Xu, Haojie; Zhu, Shanan; He, Bin

    2015-10-21

    For sensorimotor rhythm-based brain-computer interface (BCI) systems, classification of different motor imageries (MIs) remains a crucial problem. An important aspect is how many scalp electrodes (channels) should be used to reach optimal performance in classifying motor imaginations. While previous research on channel selection has mainly focused on MI task paradigms without feedback, the present work investigates optimal channel selection in MI task paradigms with real-time feedback (two-class and four-class control paradigms). In the present study, three datasets, recorded respectively from an MI task experiment and from two-class and four-class control experiments, were analyzed offline. Multiple frequency-spatial synthesized features were comprehensively extracted from every channel, and a new enhanced method, IterRelCen, was proposed to perform channel selection. IterRelCen is based on the Relief algorithm but enhanced in two respects: a changed target sample selection strategy and the adoption of iterative computation, making it more robust in feature selection. Finally, a multiclass support vector machine was applied as the classifier. The smallest number of channels that yielded the best classification accuracy was considered the optimal channel set. One-way ANOVA was employed to test the significance of performance improvement among using the optimal channels, all channels, and three typical MI channels (C3, C4, Cz). The results show that the proposed method outperformed other channel selection methods, achieving average classification accuracies of 85.2, 94.1, and 83.2 % for the three datasets, respectively. Moreover, the channel selection results reveal that the average numbers of optimal channels were significantly different among the three MI paradigms. It is demonstrated that IterRelCen has a strong ability for feature selection. In addition, the results show that the numbers of optimal channels in the three motor imagery BCI paradigms are distinct: from the MI task paradigm, to the two-class control paradigm, to the four-class control paradigm, the number of channels required to optimize classification accuracy increased. These findings may provide useful information for optimizing EEG-based BCI systems and further improving the performance of noninvasive BCIs.
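    Since IterRelCen builds on Relief, the core Relief weight update can be sketched on synthetic "channel" data. The dataset, distance metric, and iteration count below are invented for illustration, and IterRelCen's modified target-sample selection and iterative refinements are not reproduced.

```python
# Hedged sketch of basic Relief weighting: channels whose values separate
# nearest "misses" (other class) more than nearest "hits" (same class)
# accumulate weight.
import numpy as np

rng = np.random.default_rng(1)

# toy data: 3 "channels", only channel 0 carries class information
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 3))
X[:, 0] += 3.0 * y                      # informative channel

def relief_weights(X, y, n_iter=100):
    w = np.zeros(X.shape[1])
    for idx in rng.choice(len(X), n_iter, replace=False):
        x = X[idx]
        d = np.abs(X - x).sum(axis=1)   # L1 distance to every sample
        d[idx] = np.inf                 # exclude the sample itself
        same, diff = y == y[idx], y != y[idx]
        hit = X[np.where(same)[0][np.argmin(d[same])]]    # nearest same-class
        miss = X[np.where(diff)[0][np.argmin(d[diff])]]   # nearest other-class
        w += np.abs(x - miss) - np.abs(x - hit)
    return w / n_iter

w = relief_weights(X, y)
print(w)
```

Ranking channels by these weights and keeping the top-scoring ones is the basic selection step that a Relief-derived method refines.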

  10. All-Polymer Solar Cell Performance Optimized via Systematic Molecular Weight Tuning of Both Donor and Acceptor Polymers.

    PubMed

    Zhou, Nanjia; Dudnik, Alexander S; Li, Ting I N G; Manley, Eric F; Aldrich, Thomas J; Guo, Peijun; Liao, Hsueh-Chung; Chen, Zhihua; Chen, Lin X; Chang, Robert P H; Facchetti, Antonio; Olvera de la Cruz, Monica; Marks, Tobin J

    2016-02-03

    The influence of the number-average molecular weight (Mn) on the blend film morphology and photovoltaic performance of all-polymer solar cells (APSCs) fabricated with the donor polymer poly[5-(2-hexyldodecyl)-1,3-thieno[3,4-c]pyrrole-4,6-dione-alt-5,5-(2,5-bis(3-dodecylthiophen-2-yl)thiophene)] (PTPD3T) and acceptor polymer poly{[N,N'-bis(2-octyldodecyl)naphthalene-1,4,5,8-bis(dicarboximide)-2,6-diyl]-alt-5,5'-(2,2'-bithiophene)} (P(NDI2OD-T2); N2200) is systematically investigated. The Mn effect analysis of both PTPD3T and N2200 is enabled by implementing a polymerization strategy which produces conjugated polymers with tunable Mns. Experimental and coarse-grain modeling results reveal that systematic Mn variation greatly influences both intrachain and interchain interactions and ultimately the degree of phase separation and morphology evolution. Specifically, increasing Mn for both polymers shrinks blend film domain sizes and enhances donor-acceptor polymer-polymer interfacial areas, affording increased short-circuit current densities (Jsc). However, the greater disorder and intermixed feature proliferation accompanying increasing Mn promotes charge carrier recombination, reducing cell fill factors (FF). The optimized photoactive layers exhibit well-balanced exciton dissociation and charge transport characteristics, ultimately providing solar cells with a 2-fold PCE enhancement versus devices with nonoptimal Mns. Overall, it is shown that proper and precise tuning of both donor and acceptor polymer Mns is critical for optimizing APSC performance. In contrast to reports where maximum power conversion efficiencies (PCEs) are achieved for the highest Mns, the present two-dimensional Mn optimization matrix strategy locates a PCE "sweet spot" at intermediate Mns of both donor and acceptor polymers. 
This study provides synthetic methodologies to predictably access conjugated polymers with desired Mn and highlights the importance of optimizing Mn for both polymer components to realize the full potential of APSC performance.

  11. All-Polymer Solar Cell Performance Optimized via Systematic Molecular Weight Tuning of Both Donor and Acceptor Polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Nanjia; Dudnik, Alexander S.; Li, Ting I. N. G.

    2016-01-21

    The influence of the number-average molecular weight (Mn) on the blend film morphology and photovoltaic performance of all-polymer solar cells (APSCs) fabricated with the donor polymer poly[5-(2-hexyldodecyl)-1,3-thieno[3,4-c]pyrrole-4,6-dione-alt-5,5-(2,5-bis(3-dodecylthiophen-2-yl)thiophene)] (PTPD3T) and acceptor polymer poly{[N,N'-bis(2-octyldodecyl)naphthalene-1,4,5,8-bis(dicarboximide)-2,6-diyl]-alt-5,5'-(2,2'-bithiophene)} (P(NDI2OD-T2); N2200) is systematically investigated. The Mn effect analysis of both PTPD3T and N2200 is enabled by implementing a polymerization strategy which produces conjugated polymers with tunable Mns. Experimental and coarse-grain modeling results reveal that systematic Mn variation greatly influences both intrachain and interchain interactions and ultimately the degree of phase separation and morphology evolution. Specifically, increasing Mn for both polymers shrinks blend film domain sizes and enhances donor–acceptor polymer–polymer interfacial areas, affording increased short-circuit current densities (Jsc). However, the greater disorder and intermixed feature proliferation accompanying increasing Mn promotes charge carrier recombination, reducing cell fill factors (FF). The optimized photoactive layers exhibit well-balanced exciton dissociation and charge transport characteristics, ultimately providing solar cells with a 2-fold PCE enhancement versus devices with nonoptimal Mns. Overall, it is shown that proper and precise tuning of both donor and acceptor polymer Mns is critical for optimizing APSC performance. In contrast to reports where maximum power conversion efficiencies (PCEs) are achieved for the highest Mns, the present two-dimensional Mn optimization matrix strategy locates a PCE “sweet spot” at intermediate Mns of both donor and acceptor polymers. 
This study provides synthetic methodologies to predictably access conjugated polymers with desired Mn and highlights the importance of optimizing Mn for both polymer components to realize the full potential of APSC performance.

  12. Performance of Optimized Actuator and Sensor Arrays in an Active Noise Control System

    NASA Technical Reports Server (NTRS)

    Palumbo, D. L.; Padula, S. L.; Lyle, K. H.; Cline, J. H.; Cabell, R. H.

    1996-01-01

    Experiments have been conducted in NASA Langley's Acoustics and Dynamics Laboratory to determine the effectiveness of optimized actuator/sensor architectures and controller algorithms for active control of harmonic interior noise. Tests were conducted in a large-scale fuselage model, a composite cylinder which simulates a commuter-class aircraft fuselage with three sections of trim panel and a floor. Using an optimization technique based on the component transfer functions, combinations of 4 out of 8 piezoceramic actuators and 8 out of 462 microphone locations were evaluated against predicted performance. A combinatorial optimization technique called tabu search was employed to select the optimum transducer arrays. Three test frequencies represent the cases of a strong acoustic and strong structural response, a weak acoustic and strong structural response, and a strong acoustic and weak structural response. Noise reduction was obtained using a Time Averaged/Gradient Descent (TAGD) controller. Results indicate that the optimization technique successfully predicted best- and worst-case performance. An enhancement of the TAGD control algorithm was also evaluated: the principal components of the actuator/sensor transfer functions were used in the PC-TAGD controller. The principal components are shown to be independent of each other while providing control as effective as the standard TAGD.
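
    The subset-selection step described above can be sketched as a small tabu search. Everything numeric below (per-actuator gains and pairwise interference terms) is a made-up stand-in for the measured component transfer functions, and the function and variable names are invented for illustration; only the tabu mechanics (swap neighbourhood, tenure, aspiration) follow the standard algorithm.

```python
import itertools
import random

random.seed(0)

# Hypothetical per-actuator coupling scores and pairwise interference terms,
# stand-ins for the measured component transfer functions in the paper.
gain = [random.uniform(0.0, 1.0) for _ in range(8)]
interf = [[random.uniform(0.0, 0.2) for _ in range(8)] for _ in range(8)]

def predicted_reduction(subset):
    """Toy objective: summed actuator gains minus pairwise interference."""
    s = sum(gain[i] for i in subset)
    s -= sum(interf[i][j] for i, j in itertools.combinations(sorted(subset), 2))
    return s

def tabu_search(n=8, k=4, iters=50, tenure=5):
    current = set(range(k))              # initial guess: actuators {0,1,2,3}
    best, best_val = set(current), predicted_reduction(current)
    tabu = {}                            # move -> iteration it becomes legal
    for it in range(iters):
        candidates = []
        # Neighbourhood: swap one selected actuator for one unselected one.
        for i in current:
            for j in set(range(n)) - current:
                val = predicted_reduction((current - {i}) | {j})
                # Aspiration: a tabu move is allowed if it beats the best.
                if tabu.get((i, j), -1) <= it or val > best_val:
                    candidates.append((val, i, j))
        val, i, j = max(candidates, key=lambda c: c[0])
        current = (current - {i}) | {j}
        tabu[(j, i)] = it + tenure       # forbid the reverse swap for a while
        if val > best_val:
            best, best_val = set(current), val
    return best, best_val

subset, score = tabu_search()
print(sorted(subset), round(score, 3))
```

    With 8-choose-4 this could be enumerated exactly; tabu search matters for cases like 8 out of 462 microphone positions, where exhaustive search is infeasible.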

  13. Set covering algorithm, a subprogram of the scheduling algorithm for mission planning and logistic evaluation

    NASA Technical Reports Server (NTRS)

    Chang, H.

    1976-01-01

    A computer program using Lemke, Salkin and Spielberg's Set Covering Algorithm (SCA) to optimize a traffic model problem in the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) was documented. SCA forms a submodule of SAMPLE and provides for input and output, subroutines, and an interactive feature for performing the optimization and arranging the results in a readily understandable form for output.
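
    The SCA documented here is the exact enumerative algorithm of Lemke, Salkin and Spielberg; as a lighter-weight illustration of the same covering problem, a minimal greedy approximation can be sketched as follows (the task/flight data are hypothetical):

```python
def greedy_set_cover(universe, subsets, costs=None):
    """Greedy approximation to weighted set cover: repeatedly pick the
    subset with the best cost per newly covered element."""
    costs = costs or [1.0] * len(subsets)
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (i for i in range(len(subsets)) if subsets[i] & uncovered),
            key=lambda i: costs[i] / len(subsets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Toy mission-scheduling flavour: tasks 1..6, candidate flights covering them.
tasks = {1, 2, 3, 4, 5, 6}
flights = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {4, 5, 6}, {6}]
picked = greedy_set_cover(tasks, flights)
print(picked)  # → [0, 3]: those two flights cover all six tasks
```

    The greedy rule gives a logarithmic approximation guarantee, whereas the enumerative SCA finds a provably optimal cover at higher computational cost.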

  14. Water cycle algorithm: A detailed standard code

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    Inspired by the observation of the water cycle process and the movement of rivers and streams toward the sea, a population-based metaheuristic algorithm, the water cycle algorithm (WCA), has recently been proposed. Lately, an increasing number of WCA applications have appeared and the WCA has been utilized in different optimization fields. This paper provides detailed open source code for the WCA, whose performance and efficiency have been demonstrated for solving optimization problems. The WCA has an interesting and simple concept, and this paper aims to use its source code to provide a step-by-step explanation of the process it follows.
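
    A heavily reduced sketch of the WCA's main loop (not the paper's full standard code) might look like the following; the population size, number of rivers, evaporation threshold and update coefficients are arbitrary choices here:

```python
import random

random.seed(1)

def sphere(x):  # benchmark objective to minimize
    return sum(v * v for v in x)

def wca(f, dim=2, npop=30, nsr=4, dmax=1e-4, iters=300, lo=-10.0, hi=10.0):
    """Very reduced water cycle algorithm sketch: the best solution is the
    'sea', the next nsr-1 are 'rivers', the rest are 'streams'."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(npop)]
    for _ in range(iters):
        pop.sort(key=f)
        sea, rivers = pop[0], pop[1:nsr]
        # Streams flow toward a randomly assigned river.
        for i in range(nsr, npop):
            guide = random.choice(rivers)
            pop[i] = [x + random.uniform(0, 2) * (g - x)
                      for x, g in zip(pop[i], guide)]
        # Rivers flow toward the sea.
        for i in range(1, nsr):
            pop[i] = [x + random.uniform(0, 2) * (s - x)
                      for x, s in zip(pop[i], sea)]
        # Evaporation + raining: rivers too close to the sea restart randomly.
        for i in range(1, nsr):
            if sum(abs(a - b) for a, b in zip(pop[i], sea)) < dmax:
                pop[i] = [random.uniform(lo, hi) for _ in range(dim)]
    return min(pop, key=f)

best = wca(sphere)
print(best, sphere(best))
```

    The full standard code adds details omitted here (per-river stream assignment proportional to flow intensity, separate evaporation handling for streams), but the sea/river/stream hierarchy above is the core of the method.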

  15. Long working distance objective lenses for single atom trapping and imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritchard, J. D., E-mail: jonathan.pritchard@strath.ac.uk; Department of Physics, University of Strathclyde, 107 Rottenrow East, Glasgow G4 0NG; Isaacs, J. A.

    We present a pair of optimized objective lenses with long working distances of 117 mm and 65 mm, respectively, that offer diffraction limited performance for both Cs and Rb wavelengths when imaging through standard vacuum windows. The designs utilise standard catalog lens elements to provide a simple and cost-effective solution. Objective 1 provides NA = 0.175 offering 3 μm resolution whilst objective 2 is optimized for high collection efficiency with NA = 0.29 and 1.8 μm resolution. This flexible design can be further extended for use at shorter wavelengths by simply re-optimising the lens separations.
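
    The quoted resolutions are consistent with the Rayleigh criterion r = 0.61λ/NA, assuming the Cs D2 wavelength of 852 nm (an assumption for this check; the objectives cover both Cs and Rb lines):

```python
# Rayleigh criterion: r = 0.61 * wavelength / NA
def rayleigh_resolution_um(wavelength_nm, na):
    return 0.61 * wavelength_nm * 1e-3 / na

# Cs D2 line at 852 nm, the longer of the trapping/imaging wavelengths.
for na in (0.175, 0.29):
    print(na, round(rayleigh_resolution_um(852, na), 2))
# → 0.175 gives ~2.97 um (quoted as 3 um), 0.29 gives ~1.79 um (quoted 1.8 um)
```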

  16. Design optimization of tailor-rolled blank thin-walled structures based on ɛ-support vector regression technique and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Duan, Libin; Xiao, Ning-cong; Li, Guangyao; Cheng, Aiguo; Chen, Tao

    2017-07-01

    Tailor-rolled blank thin-walled (TRB-TH) structures have become important vehicle components owing to their advantages of light weight and crashworthiness. The purpose of this article is to provide an efficient lightweight design for improving the energy-absorbing capability of TRB-TH structures under dynamic loading. A finite element (FE) model for TRB-TH structures is established and validated by performing a dynamic axial crash test. Different material properties for individual parts with different thicknesses are considered in the FE model. Then, a multi-objective crashworthiness design of the TRB-TH structure is constructed based on the ɛ-support vector regression (ɛ-SVR) technique and the non-dominated sorting genetic algorithm-II. The key parameters (C, ɛ and σ) are optimized to further improve the predictive accuracy of ɛ-SVR under limited sample points. Finally, the technique for order preference by similarity to the ideal solution (TOPSIS) method is used to rank the solutions in the Pareto-optimal frontiers and find the best compromise optima. The results demonstrate that the light weight and crashworthiness performance of the optimized TRB-TH structures are superior to those of their uniform-thickness counterparts. The proposed approach provides useful guidance for designing TRB-TH energy absorbers for vehicle bodies.
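
    The final ranking step, the "technique for order preference by similarity to the ideal solution" (TOPSIS), is simple to sketch. The Pareto points and criterion weights below are hypothetical crashworthiness numbers invented for illustration, not the article's data:

```python
def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    ncrit = len(weights)
    # Vector-normalize each criterion column, then apply weights.
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(ncrit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)]
         for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = sum((a - b) ** 2 for a, b in zip(row, ideal)) ** 0.5
        d_neg = sum((a - b) ** 2 for a, b in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical Pareto points: (energy absorbed [kJ], peak crush force [kN]);
# absorbed energy is a benefit criterion, peak force a cost criterion.
pareto = [[5.2, 80.0], [6.1, 95.0], [6.8, 120.0]]
scores = topsis(pareto, weights=[0.5, 0.5], benefit=[True, False])
best = max(range(len(pareto)), key=scores.__getitem__)
print(scores, best)
```

    The alternative with the highest closeness score is the "best compromise" design on the Pareto frontier.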

  17. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective designs of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8% but also yielded higher detectability over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose or, equivalently, to provide a similar level of performance at reduced dose.

  18. Aerodynamic optimization of wind turbine rotor using CFD/AD method

    NASA Astrophysics Data System (ADS)

    Cao, Jiufa; Zhu, Weijun; Wang, Tongguang; Ke, Shitang

    2018-05-01

    The current work describes a novel technique for wind turbine rotor optimization. The aerodynamic design and optimization of a wind turbine rotor can be achieved with different methods, such as semi-empirical engineering methods and the more accurate computational fluid dynamics (CFD) method. The CFD method often provides more detailed aerodynamic features during the design process. However, its high computational cost limits its application, especially for rotor optimization purposes. In this paper, a CFD-based actuator disc (AD) model is used to represent turbulent flow over a wind turbine rotor. The rotor is modeled as a permeable disc of equivalent area where the forces from the blades are distributed on the circular disc. The AD model is coupled with a Reynolds-Averaged Navier-Stokes (RANS) solver such that the thrust and power are simulated. The design variables are the shape parameters comprising the chord, the twist and the relative thickness of the wind turbine rotor blade. The aerodynamic performance of the original and optimized reference wind turbine rotors is compared. The results show that the optimization framework can be effectively and accurately utilized in enhancing the aerodynamic performance of the wind turbine rotor.

  19. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    This work develops a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. The results indicate that the reconstructed image quality of the proposed LSQR-type and MRM-based methods is similar, and superior to that of the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method was able to overcome the computationally expensive nature of the MRM-based automated choice of the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for deployment in real time.
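
    The one-dimensional parameter search itself is easy to illustrate. The sketch below swaps in a golden-section search (a derivative-free 1-D minimizer, standing in for the paper's simplex method) and a generalized cross-validation score on a toy diagonal Tikhonov problem; it is not the paper's LSQR/MRM machinery, and all data are invented:

```python
def golden_section(f, a, b, tol=1e-6):
    """Derivative-free 1-D minimizer over [a, b] for a unimodal f."""
    phi = (5 ** 0.5 - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

# Toy diagonal problem: A = diag(s), noisy data y; the Tikhonov solution has
# a closed form per component, scored by generalized cross-validation (GCV).
s = [1.0, 0.5, 0.05, 0.01]          # singular values (ill-conditioned tail)
x_true = [1.0, -2.0, 1.5, 0.5]
noise = [0.02, -0.01, 0.015, -0.02]
y = [si * xi + n for si, xi, n in zip(s, x_true, noise)]

def gcv(lam):
    # residual norm^2 divided by (effective degrees of freedom)^2
    res = sum((y_i - s_i ** 2 * y_i / (s_i ** 2 + lam)) ** 2
              for s_i, y_i in zip(s, y))
    dof = sum(1 - s_i ** 2 / (s_i ** 2 + lam) for s_i in s)
    return res / dof ** 2

lam_opt = golden_section(gcv, 1e-8, 1.0)
print(lam_opt)
```

    The paper's contribution is precisely that the LSQR-type inner solve makes each such objective evaluation cheap enough for an automated outer search.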

  20. Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades

    NASA Astrophysics Data System (ADS)

    Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang

    2017-12-01

    This article introduces a method of mistuned parameter identification consisting of static frequency testing of blades, dichotomy and finite element analysis. A lumped parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, providing both local and global search ability. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that the optimized arrangement can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework can improve the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.
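
    As a stand-in for the hybrid genetic particle swarm scheme, the arrangement search can be illustrated with a plain genetic algorithm over blade permutations; the deviation data and the "localization" objective below are invented for illustration and are not the article's model:

```python
import random

random.seed(2)

# Hypothetical blade frequency deviations (% mistuning) to arrange on a disc.
deviations = [0.8, -0.5, 0.3, -0.9, 0.6, -0.2, 0.4, -0.7, 0.1, -0.3]

def localization_proxy(order):
    """Toy stand-in objective: penalize neighbouring blades with similar
    deviations (alternating patterns tend to reduce response localization)."""
    n = len(order)
    return sum(abs(deviations[order[i]] + deviations[order[(i + 1) % n]])
               for i in range(n))

def genetic_arrangement(f, n, pop_size=40, gens=200, pm=0.3):
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            p = random.choice(elite)[:]
            if random.random() < pm:          # swap mutation
                i, j = random.sample(range(n), 2)
                p[i], p[j] = p[j], p[i]
            else:                             # segment reversal (2-opt style)
                i, j = sorted(random.sample(range(n), 2))
                p[i:j] = reversed(p[i:j])
            children.append(p)
        pop = elite + children
    return min(pop, key=f)

best = genetic_arrangement(localization_proxy, len(deviations))
print(best, round(localization_proxy(best), 3))
```

    The article's hybrid adds a discrete particle swarm update on top of such genetic operators and evaluates candidates with the lumped parameter model rather than a proxy.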

  1. An Optimization Framework for Dynamic Hybrid Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis

    A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problem takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.

  2. Loss-resistant unambiguous phase measurement

    NASA Astrophysics Data System (ADS)

    Dinani, Hossein T.; Berry, Dominic W.

    2014-08-01

    Entangled multiphoton states have the potential to provide improved measurement accuracy, but are sensitive to photon loss. It is possible to calculate ideal loss-resistant states that maximize the Fisher information, but it is unclear how these could be experimentally generated. Here we propose a set of states that can be obtained by processing the output from parametric down-conversion. Although these states are not optimal, they provide performance very close to that of optimal states for a range of parameters. Moreover, we show how to use sequences of such states in order to obtain an unambiguous phase measurement that beats the standard quantum limit. We consider the optimization of parameters in order to minimize the final phase variance, and find that the optimum parameters are different from those that maximize the Fisher information.

  3. Optimal Regulation of Virtual Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall Anese, Emiliano; Guggilam, Swaroop S.; Simonetto, Andrea

    This paper develops a real-time algorithmic framework for aggregations of distributed energy resources (DERs) in distribution networks to provide regulation services in response to transmission-level requests. Leveraging online primal-dual-type methods for time-varying optimization problems and suitable linearizations of the nonlinear AC power-flow equations, we believe this work establishes the system-theoretic foundation to realize the vision of distribution-level virtual power plants. The optimization framework controls the output powers of dispatchable DERs such that, in aggregate, they respond to automatic-generation-control and/or regulation-services commands. This is achieved while concurrently regulating voltages within the feeder and maximizing customers' and utility's performance objectives. Convergence and tracking capabilities are analytically established under suitable modeling assumptions. Simulations are provided to validate the proposed approach.

  4. Volatile decision dynamics: experiments, stochastic description, intermittency control and traffic optimization

    NASA Astrophysics Data System (ADS)

    Helbing, Dirk; Schönhof, Martin; Kern, Daniel

    2002-06-01

    The coordinated and efficient distribution of limited resources by individual decisions is a fundamental, unsolved problem. When individuals compete for road capacities, time, space, money, goods, etc, they normally make decisions based on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed a volatile decision dynamics and far-from-optimal payoff distributions. We have also identified methods of information presentation that can considerably improve the overall performance of the system. In order to determine optimal strategies of decision guidance by means of user-specific recommendations, a stochastic behavioural description is developed. These strategies manage to increase the adaptability to changing conditions and to reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions by decision support systems and for information service providers. One of the promising fields of application is traffic optimization.

  5. A novel high-performance self-powered ultraviolet photodetector: Concept, analytical modeling and analysis

    NASA Astrophysics Data System (ADS)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new MSM-UV-photodetector (PD) based on a dual wide-band-gap material (DM) engineering approach is proposed to achieve a high-performance self-powered device. Comprehensive analytical models for the proposed sensor photocurrent and the device properties are developed, incorporating the impact of the DM aspect on the device's photoelectrical behavior. The obtained results are validated against numerical data using commercial TCAD software. Our investigation demonstrates that the adopted design amendment modulates the electric field in the device, which provides the possibility to drive appropriate photo-generated carriers without an external applied voltage. This phenomenon enables both effective carrier separation and an efficient reduction of the dark current. Moreover, a new hybrid approach based on analytical modeling and Particle Swarm Optimization (PSO) is proposed to achieve improved photoelectric behavior at zero bias, ensuring a favorable self-powered MSM-based UV-PD. It is found that the proposed design methodology has succeeded in identifying an optimized design that offers a self-powered device with high responsivity (98 mA/W) and a superior ION/IOFF ratio (480 dB). These results make the optimized MSM-UV-DM-PD suitable for providing low-cost self-powered devices for high-performance optical communication and monitoring applications.

  6. Hybrid, experimental and computational, investigation of mechanical components

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1996-07-01

    Computational and experimental methodologies have unique features for the analysis and solution of a wide variety of engineering problems. Computations provide results that depend on selection of input parameters such as geometry, material constants, and boundary conditions which, for correct modeling purposes, have to be appropriately chosen. In addition, it is relatively easy to modify the input parameters in order to computationally investigate different conditions. Experiments provide solutions which characterize the actual behavior of the object of interest subjected to specific operating conditions. However, it is impractical to experimentally perform parametric investigations. This paper discusses the use of a hybrid, computational and experimental, approach for study and optimization of mechanical components. Computational techniques are used for modeling the behavior of the object of interest while it is experimentally tested using noninvasive optical techniques. Comparisons are performed through a fringe predictor program used to facilitate the correlation between both techniques. In addition, experimentally obtained quantitative information, such as displacements and shape, can be applied in the computational model in order to improve this correlation. The result is a validated computational model that can be used for performing quantitative analyses and structural optimization. Practical application of the hybrid approach is illustrated with a representative example which demonstrates the viability of the approach as an engineering tool for structural analysis and optimization.

  7. Performance evaluation and parameter sensitivity of energy-harvesting shock absorbers on different vehicles

    NASA Astrophysics Data System (ADS)

    Guo, Sijing; Liu, Yilun; Xu, Lin; Guo, Xuexun; Zuo, Lei

    2016-07-01

    Traditional shock absorbers provide favourable ride comfort and road handling by dissipating the suspension vibration energy into heat waste. In order to harvest this dissipated energy and improve the vehicle fuel efficiency, many energy-harvesting shock absorbers (EHSAs) have been proposed in recent years. Among them, two types of EHSAs have attracted much attention. One is a traditional EHSA which converts the oscillatory vibration into bidirectional rotation using rack-pinion, ball-screw or other mechanisms. The other EHSA is equipped with a mechanical motion rectifier (MMR) that transforms the bidirectional vibration into unidirectional rotation. Hereinafter, they are referred to as NonMMR-EHSA and MMR-EHSA, respectively. This paper compares their performances with the corresponding traditional shock absorber by using closed-form analysis and numerical simulations on various types of vehicles, including passenger cars, buses and trucks. Results suggest that MMR-EHSA provides better ride performances than NonMMR-EHSA, and that MMR-EHSA is able to improve both the ride comfort and road handling simultaneously over the traditional shock absorber when installed on light-damped, heavy-duty vehicles. Additionally, the optimal parameters of MMR-EHSA are obtained for ride comfort. The optimal solutions ('Pareto-optimal solutions') are also obtained by considering the trade-off between ride comfort and road handling.

  8. Advanced Information Technology in Simulation Based Life Cycle Design

    NASA Technical Reports Server (NTRS)

    Renaud, John E.

    2003-01-01

    In this research a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision based design framework for non-deterministic optimization. To date CO strategies have been developed for use in application to deterministic systems design problems. In this research the decision based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework as originally proposed provides a single level optimization strategy that combines engineering decisions with business decisions in a single level optimization. By transforming this framework for use in collaborative optimization one can decompose the business and engineering decision making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO) the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.

  9. Multidisciplinary Optimization for Aerospace Using Genetic Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.

    2007-01-01

    In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications. For example, NASTRAN(TradeMark) has its solution sequence 200 for Design Optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, or in a loop between the executive and the tool, or both.

  10. Processing and fabrication of mixed uranium/refractory metal carbide fuels with liquid-phase sintering

    NASA Astrophysics Data System (ADS)

    Knight, Travis W.; Anghaie, Samim

    2002-11-01

    Optimization of powder processing techniques was sought for the fabrication of single-phase, solid-solution mixed uranium/refractory metal carbide nuclear fuels - namely (U, Zr, Nb)C. These advanced, ultra-high temperature nuclear fuels have great potential for improved performance over the graphite-matrix dispersed fuels tested in the Rover/NERVA program of the 1960s and early 1970s. Hypostoichiometric fuel samples with carbon-to-metal ratios of 0.98, uranium metal mole fractions of 5% and 10%, and porosities less than 5% were fabricated. These qualities should provide for the longest life and highest performance capability for these fuels. Study and optimization of processing methods were necessary to provide the quality assurance of samples for meaningful testing and assessment of performance for nuclear thermal propulsion applications. The processing parameters and benefits of enhanced sintering by uranium carbide liquid-phase sintering were established for the rapid and effective consolidation and formation of a solid-solution mixed carbide nuclear fuel.

  11. An optimal autonomous microgrid cluster based on distributed generation droop parameter optimization and renewable energy sources using an improved grey wolf optimizer

    NASA Astrophysics Data System (ADS)

    Moazami Goodarzi, Hamed; Kazemi, Mohammad Hosein

    2018-05-01

    Microgrid (MG) clustering is regarded as an important driver in improving the robustness of MGs. However, little research has been conducted on providing appropriate MG clustering. This article addresses this shortfall. It proposes a novel multi-objective optimization approach for finding optimal clustering of autonomous MGs by focusing on variables such as distributed generation (DG) droop parameters, the location and capacity of DG units, renewable energy sources, capacitors and powerline transmission. Power losses are minimized and voltage stability is improved while virtual cut-set lines with minimum power transmission for clustering MGs are obtained. A novel chaotic grey wolf optimizer (CGWO) algorithm is applied to solve the proposed multi-objective problem. The performance of the approach is evaluated by utilizing a 69-bus MG in several scenarios.
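
    A minimal grey wolf optimizer sketch (without the chaotic-map modification of the proposed CGWO, and with a toy objective standing in for the multi-objective loss/voltage criterion) might look like:

```python
import random

random.seed(3)

def gwo(f, dim, lb, ub, n_wolves=20, iters=200):
    """Minimal grey wolf optimizer: the pack moves toward the three best
    wolves (alpha, beta, delta); the parameter a decays from 2 to 0."""
    wolves = [[random.uniform(lb, ub) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 - 2 * t / iters
        for i in range(3, n_wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a          # encircling coefficient
                    C = 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                new.append(min(ub, max(lb, x / 3)))
            wolves[i] = new
    return min(wolves, key=f)

# Toy surrogate objective: shifted sphere with optimum at (1, 1, 1).
best = gwo(lambda x: sum((v - 1) ** 2 for v in x), dim=3, lb=-5, ub=5)
print([round(v, 3) for v in best])
```

    The chaotic variant in the article replaces the uniform random sequences r1, r2 with deterministic chaotic maps to improve exploration; the pack-update structure is unchanged.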

  12. Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.

    Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
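
    The decomposition idea behind the alternating direction method of multipliers (ADMM) can be illustrated with consensus on a toy scalar problem: each "agent" (say, a PV system) keeps a private quadratic cost and exchanges only its current estimate, yet the iterates agree on the global optimum. The cost targets below are hypothetical, not the paper's model:

```python
# Consensus ADMM sketch: N agents each hold a private quadratic cost
# f_i(x) = (x - c_i)^2 and must agree on a shared scalar x. The optimum
# of sum_i f_i is the mean of the c_i, reached with only local updates
# plus exchange of the agents' current estimates.
c = [1.0, 4.0, 7.0, 2.0]             # private targets (hypothetical setpoints)
rho = 1.0                            # ADMM penalty parameter
x = [0.0] * len(c)                   # local estimates
u = [0.0] * len(c)                   # scaled dual variables
z = 0.0                              # consensus variable

for _ in range(100):
    # Local x-update: argmin_x (x - c_i)^2 + (rho/2)(x - z + u_i)^2
    x = [(2 * ci + rho * (z - ui)) / (2 + rho) for ci, ui in zip(c, u)]
    # Consensus z-update: average of (x_i + u_i) across agents
    z = sum(xi + ui for xi, ui in zip(x, u)) / len(c)
    # Dual update, performed locally by each agent
    u = [ui + xi - z for ui, xi in zip(u, x)]

print(round(z, 4))  # converges to mean(c) = 3.5
```

    In the paper this same pattern is applied to optimal power flow-type subproblems, with the utility performing the consensus step and each PV system its local update.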

  13. Gate length scaling optimization of FinFETs

    NASA Astrophysics Data System (ADS)

    Chen, Shoumian; Shang, Enming; Hu, Shaojian

    2018-06-01

    This paper introduces a device performance optimization approach for the FinFET through optimization of the gate length. As the gate length is reduced, the leakage current (Ioff) increases and the stress along the channel is enhanced, which leads to an increase in the drive current (Isat) of the PMOS. In order to sustain Ioff, the work function is adjusted to offset the effect of the increased stress. Changing the gate length of the transistor yields different drive currents when the leakage current is fixed by adjusting the work function. For a given device, an optimal gate length is found that provides the highest drive current. As an example, for a standard performance device with Ioff = 1 nA/um, the best performance is Isat = 856 uA/um at L = 34 nm for the 14 nm FinFET and Isat = 1130 uA/um at L = 21 nm for the 7 nm FinFET. A 7 nm FinFET thus exhibits a performance boost of 32% compared with the 14 nm FinFET. However, applying the same method to a 5 nm FinFET, the performance boost falls short of expectations compared with the 7 nm FinFET, owing to severe short-channel effects and the exhausted channel stress in the FinFET.
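
    The procedure described (hold Ioff fixed by notionally re-tuning the work function, then sweep the gate length for the highest Isat) can be mimicked with a deliberately made-up, uncalibrated model; the numbers and functional forms below are invented and do not reproduce the paper's 34 nm / 21 nm optima:

```python
# Toy trade-off at fixed Ioff: shorter channels gain stress-driven current
# but pay a short-channel-effect penalty; the sweep finds the best balance.
def isat_at_fixed_ioff(L_nm):
    stress_gain = 1200.0 / (L_nm + 8.0)                 # shorter -> more stress
    sce_penalty = 40.0 * max(0.0, 30.0 - L_nm) ** 1.5   # short-channel effects
    return 600.0 + 20.0 * stress_gain - sce_penalty     # arbitrary units

lengths = range(16, 45)
best_L = max(lengths, key=isat_at_fixed_ioff)
print(best_L, round(isat_at_fixed_ioff(best_L), 1))
```

    In the paper the two competing terms come from TCAD simulation of the stressed channel and the work-function-adjusted leakage rather than closed-form expressions, but the optimization itself is exactly this one-dimensional sweep.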

  14. Optimizing performance through intrinsic motivation and attention for learning: The OPTIMAL theory of motor learning.

    PubMed

    Wulf, Gabriele; Lewthwaite, Rebecca

    2016-10-01

    Effective motor performance is important for surviving and thriving, and skilled movement is critical in many activities. Much theorizing over the past few decades has focused on how certain practice conditions affect the processing of task-related information to affect learning. Yet, existing theoretical perspectives do not accommodate significant recent lines of evidence demonstrating motivational and attentional effects on performance and learning. These include research on (a) conditions that enhance expectancies for future performance, (b) variables that influence learners' autonomy, and (c) an external focus of attention on the intended movement effect. We propose the OPTIMAL (Optimizing Performance through Intrinsic Motivation and Attention for Learning) theory of motor learning. We suggest that motivational and attentional factors contribute to performance and learning by strengthening the coupling of goals to actions. We provide explanations for the performance and learning advantages of these variables on psychological and neuroscientific grounds. We describe a plausible mechanism for expectancy effects rooted in responses of dopamine to the anticipation of positive experience and temporally associated with skill practice. Learner autonomy acts perhaps largely through an enhanced expectancy pathway. Furthermore, we consider the influence of an external focus for the establishment of efficient functional connections across brain networks that subserve skilled movement. We speculate that enhanced expectancies and an external focus propel performers' cognitive and motor systems in productive "forward" directions and prevent "backsliding" into self- and non-task focused states. Expected success presumably breeds further success and helps consolidate memories. We discuss practical implications and future research directions.

  15. Coupled Solid Rocket Motor Ballistics and Trajectory Modeling for Higher Fidelity Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Ables, Brett

    2014-01-01

    Multi-stage launch vehicles with solid rocket motors (SRMs) face design optimization challenges, especially when the mission scope changes frequently. Significant performance benefits can be realized if the solid rocket motors are optimized to the changing requirements. While SRMs represent a fixed performance at launch, rapid design iterations enable flexibility at design time, yielding significant performance gains. The streamlining and integration of SRM design and analysis can be achieved with improved analysis tools. While powerful and versatile, the Solid Performance Program (SPP) is not conducive to rapid design iteration. Performing a design iteration with SPP and a trajectory solver is a labor intensive process. To enable a better workflow, SPP, the Program to Optimize Simulated Trajectories (POST), and the interfaces between them have been improved and automated, and a graphical user interface (GUI) has been developed. The GUI enables real-time visual feedback of grain and nozzle design inputs, enforces parameter dependencies, removes redundancies, and simplifies manipulation of SPP and POST's numerous options. Automating the analysis also simplifies batch analyses and trade studies. Finally, the GUI provides post-processing, visualization, and comparison of results. Wrapping legacy high-fidelity analysis codes with modern software provides the improved interface necessary to enable rapid coupled SRM ballistics and vehicle trajectory analysis. Low cost trade studies demonstrate the sensitivities of flight performance metrics to propulsion characteristics. Incorporating high fidelity analysis from SPP into vehicle design reduces performance margins and improves reliability. By flying an SRM designed with the same assumptions as the rest of the vehicle, accurate comparisons can be made between competing architectures. 
In summary, this flexible workflow is a critical component to designing a versatile launch vehicle model that can accommodate a volatile mission scope.

  16. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics for assessing temporal model performance. By contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (spaef) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. spaef, its three components individually, and two alternative spatial performance metrics, connectivity analysis and the fractions skill score, are tested in a spatial-pattern-oriented calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three spaef components are independent, which allows them to complement each other in a meaningful way.
    This study promotes the use of bias-insensitive metrics, which allow comparison of related variables that may differ in units, in order to optimally exploit spatial observations made available by remote sensing platforms. We see great potential for spaef across environmental disciplines dealing with spatially distributed modelling.
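A minimal sketch of such a three-component metric, assuming Pearson correlation, a coefficient-of-variation ratio, and histogram overlap of z-scored fields combined as a Euclidean distance from the ideal point. The published spaef definition may differ in details such as the histogram binning, so treat this as illustrative:

```python
import numpy as np

def spaef(obs, sim, bins=100):
    """Sketch of a SPAEF-style spatial efficiency metric combining three
    equally weighted components: correlation, coefficient-of-variation
    ratio, and histogram overlap of z-scored values (bias-insensitive)."""
    obs, sim = np.ravel(obs), np.ravel(sim)
    # component 1: Pearson correlation between the two spatial patterns
    alpha = np.corrcoef(obs, sim)[0, 1]
    # component 2: ratio of coefficients of variation (spread, bias-free)
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    # component 3: histogram intersection of z-scored fields (unit-free)
    z_obs = (obs - obs.mean()) / obs.std()
    z_sim = (sim - sim.mean()) / sim.std()
    lo = min(z_obs.min(), z_sim.min())
    hi = max(z_obs.max(), z_sim.max())
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()
    # Euclidean distance from the ideal point (1, 1, 1); 1 is a perfect match
    return 1.0 - np.sqrt((alpha - 1.0) ** 2 + (beta - 1.0) ** 2 + (gamma - 1.0) ** 2)
```

An identical simulated and observed pattern scores 1; any degradation in correlation, spread, or distribution shape pulls the score below 1.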

  17. Task-Driven Tube Current Modulation and Regularization Design in Computed Tomography with Penalized-Likelihood Reconstruction.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2016-02-01

    This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information about the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on the implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.

  18. Quantifying Performance Bias in Label Fusion

    DTIC Science & Technology

    2012-08-21

    detect), may provide the end-user with the means to appropriately adjust the performance and optimal thresholds for performance by fusing legacy systems... Boolean combination of classification systems in ROC space: an application to anomaly detection with HMMs. Pattern Recognition, 43(8), 2732-2752. 10... Shamsuddin, S. (2009). An overview of neural networks use in anomaly intrusion detection systems. Paper presented at the Research and Development (SCOReD

  19. Complex optimization for big computational and experimental neutron datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  20. Complex optimization for big computational and experimental neutron datasets

    DOE PAGES

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...

    2016-11-07

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  1. Program manual for ASTOP, an Arbitrary space trajectory optimization program

    NASA Technical Reports Server (NTRS)

    Horsewood, J. L.

    1974-01-01

    The ASTOP program (an Arbitrary Space Trajectory Optimization Program) designed to generate optimum low-thrust trajectories in an N-body field while satisfying selected hardware and operational constraints is presented. The trajectory is divided into a number of segments or arcs over which the control is held constant. This constant control over each arc is optimized using a parameter optimization scheme based on gradient techniques. A modified Encke formulation of the equations of motion is employed. The program provides a wide range of constraint, end-condition, and performance-index options. The basic approach is conducive to future expansion of features, such as the incorporation of new constraints and the addition of new end conditions.

  2. Performance dependence of hybrid x-ray computed tomography/fluorescence molecular tomography on the optical forward problem.

    PubMed

    Hyde, Damon; Schulz, Ralf; Brooks, Dana; Miller, Eric; Ntziachristos, Vasilis

    2009-04-01

    Hybrid imaging systems combining x-ray computed tomography (CT) and fluorescence tomography can improve fluorescence imaging performance by incorporating anatomical x-ray CT information into the optical inversion problem. While the use of image priors has been investigated in the past, little is known about the optimal use of forward photon propagation models in hybrid optical systems. In this paper, we explore the impact on reconstruction accuracy of the use of propagation models of varying complexity, specifically in the context of these hybrid imaging systems where significant structural information is known a priori. Our results demonstrate that the use of generically known parameters provides near optimal performance, even when parameter mismatch remains.

  3. A New Model for Optimal Mechanical and Thermal Performance of Cement-Based Partition Wall

    PubMed Central

    Huang, Shiping; Hu, Mengyu; Cui, Nannan; Wang, Weifeng

    2018-01-01

    The prefabricated cement-based partition wall has been widely used in assembled buildings because of its high manufacturing efficiency, high-quality surface, and simple and convenient construction process. In this paper, a general porous partition wall that is made from cement-based materials was proposed to meet the optimal mechanical and thermal performance during transportation, construction and its service life. The porosity of the proposed partition wall is formed by elliptic-cylinder-type cavities. The finite element method was used to investigate the mechanical and thermal behaviour, which shows that the proposed model has distinct advantages over the current partition wall that is used in the building industry. It is found that, by controlling the eccentricity of the elliptic-cylinder cavities, the proposed wall stiffness can be adjusted to respond to the imposed loads and to improve the thermal performance, which can be used for the optimum design. Finally, design guidance is provided to obtain the optimal mechanical and thermal performance. The proposed model could be used as a promising candidate for partition wall in the building industry. PMID:29673176

  4. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  5. A New Model for Optimal Mechanical and Thermal Performance of Cement-Based Partition Wall.

    PubMed

    Huang, Shiping; Hu, Mengyu; Huang, Yonghui; Cui, Nannan; Wang, Weifeng

    2018-04-17

    The prefabricated cement-based partition wall has been widely used in assembled buildings because of its high manufacturing efficiency, high-quality surface, and simple and convenient construction process. In this paper, a general porous partition wall that is made from cement-based materials was proposed to meet the optimal mechanical and thermal performance during transportation, construction and its service life. The porosity of the proposed partition wall is formed by elliptic-cylinder-type cavities. The finite element method was used to investigate the mechanical and thermal behaviour, which shows that the proposed model has distinct advantages over the current partition wall that is used in the building industry. It is found that, by controlling the eccentricity of the elliptic-cylinder cavities, the proposed wall stiffness can be adjusted to respond to the imposed loads and to improve the thermal performance, which can be used for the optimum design. Finally, design guidance is provided to obtain the optimal mechanical and thermal performance. The proposed model could be used as a promising candidate for partition wall in the building industry.

  6. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on the coupling of Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection.
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
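The global stage of a hybrid scheme like the Particle Swarm / Levenberg-Marquardt coupling mentioned above can be sketched as a minimal particle-swarm optimizer. This is an illustrative sketch, not MADS code; in the hybrid, a swarm like this would supply starting points that a local Levenberg-Marquardt refinement then polishes.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle-swarm optimizer (sketch). Minimizes f over a box.

    bounds: (lower, upper) coordinate lists defining the search box.
    Each particle keeps its personal best; the swarm tracks a global best.
    """
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))   # initial positions
    v = np.zeros_like(x)                              # initial velocities
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)]                        # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # stay inside the box
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pval)]
    return g, pval.min()
```

On a smooth well-conditioned objective the swarm collapses onto the minimizer, which a gradient-based local method can then refine to high precision.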

  7. A Comparison of Simulated Annealing, Genetic Algorithm and Particle Swarm Optimization in Optimal First-Order Design of Indoor TLS Networks

    NASA Astrophysics Data System (ADS)

    Jia, F.; Lichti, D.

    2017-09-01

    The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and to compare their performances. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from each viewpoint based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that can obtain complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experiment environment was simulated from a room located on the University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions while SA does not guarantee an optimal solution within a limited number of iterations. Overall, GA is considered the best choice for this problem based on its capability of providing an optimal solution and its fewer parameters to tune.
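The viewpoint-selection problem can be sketched as a toy simulated-annealing search over subsets of viewpoints. The visibility table, penalty weight, and cooling schedule below are illustrative assumptions; the paper's formulation also minimizes the sum of incidence angles, which is omitted here.

```python
import math
import random

def sa_viewpoints(visibility, iters=5000, seed=0):
    """Simulated-annealing sketch of the first-order design problem:
    pick a minimal set of viewpoints that together see every wall segment.
    `visibility` maps viewpoint -> set of visible segment ids (a toy
    stand-in for a score table built from scanning geometry constraints).
    """
    rng = random.Random(seed)
    views = list(visibility)
    segments = set().union(*visibility.values())

    def cost(s):
        covered = set().union(*(visibility[v] for v in s)) if s else set()
        # one unit per viewpoint plus a heavy penalty per uncovered segment
        return len(s) + 10 * len(segments - covered)

    state = set(views)                      # start from the feasible all-on state
    best, best_cost = set(state), cost(state)
    temp = 2.0
    for _ in range(iters):
        cand = set(state)
        cand.symmetric_difference_update({rng.choice(views)})  # flip one viewpoint
        delta = cost(cand) - cost(state)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            state = cand
            if cost(state) < best_cost:
                best, best_cost = set(state), cost(state)
        temp *= 0.999                       # geometric cooling schedule
    return best
```

Because infeasible states are penalized more heavily than any viewpoint saves, the best state found is always a complete cover; the annealing then whittles it down.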

  8. Magnetostrictive materials and method for improving AC characteristics in same

    DOEpatents

    Pulvirenti, Patricia P.; Jiles, David C.

    2001-08-14

    The present invention provides Terfenol-D alloys ("doped" Terfenol) having optimized performance under time-dependent magnetic fields. In one embodiment, performance is optimized by lowering the conductivity of Terfenol, thereby improving the frequency response. This can be achieved through the addition of Group III or IV elements, such as Si and Al. Addition of these types of elements provides scattering sites for conduction electrons, thereby increasing resistivity by 125%, which leads to an average increase in penetration depth of 80% at 1 kHz and an increase in energy conversion efficiency of 55%. The permeability of doped Terfenol remains constant over a wider frequency range as compared with undoped Terfenol. These results demonstrate that adding impurities, such as Si and Al, is effective in improving the ac characteristics of Terfenol. A magnetoelastic Gruneisen parameter, γ_me, has also been derived from the thermodynamic equations of state, and provides another means by which to characterize the coupling efficiency in magnetostrictive materials on a more fundamental basis.

  9. Multijunction Solar Cell Technology for Mars Surface Applications

    NASA Technical Reports Server (NTRS)

    Stella, Paul M.; Mardesich, Nick; Ewell, Richard C.; Mueller, Robert L.; Endicter, Scott; Aiken, Daniel; Edmondson, Kenneth; Fetze, Chris

    2006-01-01

    Solar cells used for Mars surface applications have been commercial space-qualified AM0-optimized devices. Due to the Martian atmosphere, these cells are not optimized for the Mars surface and as a result operate at reduced efficiency. A multi-year program, MOST (Mars Optimized Solar Cell Technology), managed by JPL and funded by NASA Code S, was initiated in 2004 to develop tools to modify commercial AM0 cells for the Mars surface solar spectrum and to fabricate Mars-optimized devices for verification. This effort required defining the surface incident spectrum, developing an appropriate laboratory solar simulator measurement capability, and developing and testing commercial cells modified for the Mars surface spectrum. This paper discusses the program, including results for the initial modified cells. Simulated Mars surface measurements of MER cells and Phoenix Lander cells (2007 launch) are provided to characterize the performance loss for those missions. In addition, the performance of the MER rover solar arrays is updated to reflect more than two years of operation.

  10. Wet cooling towers: rule-of-thumb design and simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leeper, Stephen A.

    1981-07-01

    A survey of wet cooling tower literature was performed to develop a simplified method of cooling tower design and simulation for use in power plant cycle optimization. The theory of heat exchange in wet cooling towers is briefly summarized. The Merkel equation (the fundamental equation of heat transfer in wet cooling towers) is presented and discussed. The cooling tower fill constant (Ka) is defined and values derived. A rule-of-thumb method for the optimized design of cooling towers is presented. The rule-of-thumb design method provides information useful in power plant cycle optimization, including tower dimensions, water consumption rate, exit air temperature, power requirements and construction cost. In addition, a method for simulation of cooling tower performance at various operating conditions is presented. This information is also useful in power plant cycle evaluation. Using the information presented, it will be possible to incorporate wet cooling tower design and simulation into a procedure to evaluate and optimize power plant cycles.
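The Merkel equation mentioned above can be evaluated numerically once the enthalpy curves are known. The sketch below integrates it with the trapezoidal rule; the constant enthalpy curves in the test are illustrative placeholders, not properties taken from the report.

```python
import numpy as np

def merkel_number(t_cold, t_hot, h_sat, h_air, cw=4.186, n=400):
    """Evaluate the Merkel integral  KaV/L = integral of cw dT / (h_sat(T) - h_air(T))
    over the cooling range, using the trapezoidal rule.

    Temperatures in deg C, enthalpies in kJ/kg dry air; h_sat(T) is the
    enthalpy of saturated air at the water temperature and h_air(T) the
    bulk-air enthalpy at the same elevation in the tower.
    """
    T = np.linspace(t_cold, t_hot, n)
    g = cw / (h_sat(T) - h_air(T))        # integrand, per deg C
    # trapezoidal rule, written out explicitly
    return float(0.5 * np.sum((g[1:] + g[:-1]) * np.diff(T)))
```

A larger enthalpy driving force (h_sat - h_air) shrinks the integrand, so an easier duty yields a smaller required Merkel number KaV/L, consistent with the physics.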

  11. FSMRank: feature selection algorithm for learning to rank.

    PubMed

    Lai, Han-Jiang; Pan, Yan; Tang, Yong; Yu, Rong

    2013-06-01

    In recent years, there has been growing interest in learning to rank. The introduction of feature selection into different learning problems has been proven effective. These facts motivate us to investigate the problem of feature selection for learning to rank. We propose a joint convex optimization formulation which minimizes ranking errors while simultaneously conducting feature selection. This optimization formulation provides a flexible framework in which we can easily incorporate various importance measures and similarity measures of the features. To solve this optimization problem, we use Nesterov's approach to derive an accelerated gradient algorithm with a fast convergence rate O(1/T^2). We further develop a generalization bound for the proposed optimization problem using the Rademacher complexities. Extensive experimental evaluations are conducted on the public LETOR benchmark datasets. The results demonstrate that the proposed method shows: 1) significant ranking performance gain compared to several feature selection baselines for ranking, and 2) very competitive performance compared to several state-of-the-art learning-to-rank algorithms.
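An accelerated gradient loop of the kind referenced above (Nesterov momentum with the standard t-sequence, achieving the O(1/T^2) rate on smooth convex objectives) can be sketched on a least-squares toy problem. The FSMRank objective itself adds ranking losses and feature-selection regularization not modeled here.

```python
import numpy as np

def accelerated_gradient(grad, x0, L, steps=500):
    """Nesterov-style accelerated gradient descent (FISTA momentum).

    grad: gradient of a smooth convex objective; L: its Lipschitz constant.
    """
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(steps):
        x_next = y - grad(y) / L                        # gradient step at lookahead point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + (t - 1.0) / t_next * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# demo on least squares: minimize ||A x - b||^2, gradient 2 A^T (A x - b)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
L = 2.0 * np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of the gradient
x_star = accelerated_gradient(lambda x: 2.0 * A.T @ (A @ x - b), np.zeros(2), L)
```

With t = 1 the first momentum coefficient is zero, so the loop starts as plain gradient descent and the extrapolation ramps up as t grows, which is what yields the accelerated rate.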

  12. Optimizing the number of steps in learning tasks for complex skills.

    PubMed

    Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G

    2005-06-01

    Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or those receiving a large number of steps. Participants were 35 sophomore law students studying at Dutch universities, mean age=22.8 years (SD=3.5), 63% were female. Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e. optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved to be less efficient for carrying out the learning task. An intermediate number of steps is the most effective, proving that the number of steps can be optimized for improving learning.

  13. Optimal and Adaptive Control of Flow in a Thermal Convection Loop

    NASA Astrophysics Data System (ADS)

    Yuen, Po Ki; Bau, Haim

    1998-11-01

    In theory and experiment, we use nonlinear and linear optimal and adaptive controllers to suppress the naturally occurring chaotic convection in a thermal convection loop. The thermal convection loop is a simple experimental analog of the Lorenz equations, and it provides a convenient platform for testing and comparing the performance of various control strategies in a fluid mechanical setting. The performance of the optimal and adaptive controllers is compared with that of a previously developed simple feedback controller (Singer, J., Wang, Y., & Bau, H. H., 1991, Physical Review Letters, 66, 1123-1125; Wang, Y., Singer, J., & Bau, H. H., 1992, J. Fluid Mechanics, 237, 479-498), a nonlinear controller with a cubic nonlinearity (Yuen, P., & Bau, H. H., 1996, J. Fluid Mechanics, 317, 91-109), and a neural net controller (Yuen, P., & Bau, H. H., 1998, Neural Networks, 11, 557-569). It is demonstrated that an adaptive controller can perform successfully even when the system's model is not known.

  14. Low cost Ku-band earth terminals for voice/data/facsimile

    NASA Technical Reports Server (NTRS)

    Kelley, R. L.

    1977-01-01

    A Ku-band satellite earth terminal capable of providing two-way voice/facsimile teleconferencing, 128 kbps data, telephone, and high-speed imagery services is proposed. Optimized terminal cost and configuration are presented as a function of FDMA and TDMA approaches to multiple access. The entire terminal, from the antenna to microphones, speakers and facsimile equipment, is considered. Component cost versus performance has been projected as a function of the size of the procurement and predicted hardware innovations and production techniques through 1985. The lowest-cost combinations of components have been determined in a computer optimization algorithm. The system requirements, including terminal EIRP and G/T, satellite size, power per spacecraft transponder, satellite antenna characteristics, and link propagation outage, were selected using a computerized system cost/performance optimization algorithm. System cost and terminal cost and performance requirements are presented as a function of the size of a nationwide U.S. network. Service costs are compared with typical conference travel costs to show the viability of the proposed terminal.

  15. Optimization and validation of moving average quality control procedures using bias detection curves and moving average validation charts.

    PubMed

    van Rossum, Huub H; Kemperman, Hans

    2017-02-01

    To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms, and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
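    The simulated MA procedure described above (truncation limits, a calculation algorithm, control limits, and bias detection counted in number of results) can be sketched as follows; all numeric limits and the sodium-like data are hypothetical stand-ins, not the paper's validated settings:

```python
import random

def ma_bias_detection(results, window=10, truncation=(130.0, 150.0),
                      control=(136.0, 142.0)):
    """Number of results needed before the moving average of truncated
    assay values first violates a control limit; None if never flagged."""
    lo_t, hi_t = truncation
    lo_c, hi_c = control
    buf = []
    for i, x in enumerate(results, start=1):
        if lo_t <= x <= hi_t:              # truncation: discard outlying results
            buf.append(x)
            if len(buf) > window:
                buf.pop(0)
            ma = sum(buf) / len(buf)
            if ma < lo_c or ma > hi_c:     # control limit exceeded -> flag bias
                return i
    return None

random.seed(1)
in_control = [random.gauss(139.0, 1.0) for _ in range(200)]  # sodium-like results
biased = [x + 4.0 for x in in_control]                       # constant +4 bias
```

    Repeating such runs over many simulated biases and taking the median detection count per bias yields one point of a bias detection curve.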

  16. Assessing performance in complex team environments.

    PubMed

    Whitmore, Jeffrey N

    2005-07-01

    This paper provides a brief introduction to team performance assessment. It highlights some critical aspects leading to the successful measurement of team performance in realistic console operations; discusses the idea of process and outcome measures; presents two types of team data collection systems; and provides an example of team performance assessment. Team performance assessment is a complicated endeavor relative to assessing individual performance. Assessing team performance necessitates a clear understanding of each operator's task, both at the individual and team level, and requires planning for efficient data capture and analysis. Though team performance assessment requires considerable effort, the results can be very worthwhile. Most tasks performed in Command and Control environments are team tasks, and understanding this type of performance is becoming increasingly important to the evaluation of mission success and for overall system optimization.

  17. The Business Change Initiative: A Novel Approach to Improved Cost and Schedule Management

    NASA Technical Reports Server (NTRS)

    Shinn, Stephen A.; Bryson, Jonathan; Klein, Gerald; Lunz-Ruark, Val; Majerowicz, Walt; McKeever, J.; Nair, Param

    2016-01-01

    Goddard Space Flight Center's Flight Projects Directorate employed a Business Change Initiative (BCI) to infuse a series of activities coordinated to drive improved cost and schedule performance across Goddard's missions. This sustaining change framework provides a platform to manage and implement cost and schedule control techniques throughout the project portfolio. The BCI concluded in December 2014, deploying over 100 cost and schedule management changes including best practices, tools, methods, training, and knowledge sharing. The new business approach has driven the portfolio to improved programmatic performance. The last eight launched GSFC missions have optimized cost, schedule, and technical performance on a sustained basis to deliver on time and within budget, returning funds in many cases. While not every future mission will boast such strong performance, improved cost and schedule tools, management practices, and ongoing comprehensive evaluations of program planning and control methods to refine and implement best practices will continue to provide a framework for sustained performance. This paper will describe the tools, techniques, and processes developed during the BCI and the utilization of collaborative content management tools to disseminate project planning and control techniques to ensure continuous collaboration and optimization of cost and schedule management in the future.

  18. Joint optimization of maintenance, buffers and machines in manufacturing lines

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Nourelfath, Mustapha

    2018-01-01

    This article considers a series manufacturing line composed of several machines separated by intermediate buffers of finite capacity. The goal is to find the optimal number of preventive maintenance actions performed on each machine, the optimal selection of machines and the optimal buffer allocation plan that minimize the total system cost, while providing the desired system throughput level. The mean times between failures of all machines are assumed to increase when applying periodic preventive maintenance. To estimate the production line throughput, a decomposition method is used. The decision variables in the formulated optimal design problem are buffer levels, types of machines and times between preventive maintenance actions. Three heuristic approaches are developed to solve the formulated combinatorial optimization problem. The first heuristic consists of a genetic algorithm, the second is based on the nonlinear threshold accepting metaheuristic and the third is an ant colony system. The proposed heuristics are compared and their efficiency is shown through several numerical examples. It is found that the nonlinear threshold accepting algorithm outperforms the genetic algorithm and ant colony system, while the genetic algorithm provides better results than the ant colony system for longer manufacturing lines.
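    A minimal sketch of the nonlinear threshold accepting metaheuristic on a toy buffer-allocation problem; the throughput surrogate, penalty, and geometric threshold schedule are invented for illustration (the paper estimates throughput with a decomposition method and also optimizes machine types and maintenance intervals):

```python
import random

def throughput(buffers):
    # hypothetical surrogate: line throughput limited by its smallest buffer
    return 1.0 - 1.0 / (1.0 + min(buffers))

def cost(buffers, target=0.79, penalty=100.0):
    c = float(sum(buffers))                # buffer capacity cost
    if throughput(buffers) < target:       # throughput constraint as a penalty
        c += penalty
    return c

def threshold_accepting(n=5, iters=2000, seed=0):
    rng = random.Random(seed)
    x = [10] * n                           # start from large, feasible buffers
    best, best_c = x[:], cost(x)
    thresh = 5.0
    for _ in range(iters):
        y = x[:]
        i = rng.randrange(n)
        y[i] = max(1, min(10, y[i] + rng.choice([-1, 1])))
        if cost(y) - cost(x) < thresh:     # accept any move not much worse
            x = y
            if cost(x) < best_c:
                best, best_c = x[:], cost(x)
        thresh *= 0.998                    # geometrically shrinking threshold
    return best, best_c

best, best_cost = threshold_accepting()
```

    Unlike simulated annealing, acceptance here is deterministic: any candidate whose cost increase stays below the shrinking threshold is taken.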

  19. Optimizing a reconfigurable material via evolutionary computation

    NASA Astrophysics Data System (ADS)

    Wilken, Sam; Miskin, Marc Z.; Jaeger, Heinrich M.

    2015-08-01

    Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and after completion needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer controlled laboratory experiment in which a 6 × 6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 10^10 possible configurations, after testing only 1500 independent trials the algorithm identifies an optimized configuration of layered rigid and compliant regions.
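    The search loop can be sketched as a standard genetic algorithm over 36-bit field patterns; `transmitted_force` below is an invented surrogate, whereas the experiment in the record evaluates each pattern by direct force measurement on the physical suspension:

```python
import random

GRID = 36  # 6 x 6 grid of electromagnets, one on/off bit per magnet

def transmitted_force(pattern):
    # hypothetical stand-in for the measured transmitted force
    on = sum(pattern)
    vertical = sum(pattern[i] == pattern[i + 6] for i in range(GRID - 6))
    return abs(on - 12) + 0.1 * vertical

def genetic_algorithm(pop_size=30, gens=60, seed=2):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(GRID)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=transmitted_force)
        elite = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, GRID)        # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(GRID)] ^= 1     # single-bit mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=transmitted_force)

best = genetic_algorithm()
```

    Replacing `transmitted_force` with a call that drives the magnets and reads a force sensor turns the same loop into the direct physical optimization the record describes.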

  20. Acceleration-Augmented LQG Control of an Active Magnetic Bearing

    NASA Technical Reports Server (NTRS)

    Feeley, Joseph J.

    1993-01-01

    A linear-quadratic-gaussian (LQG) regulator controller design for an acceleration-augmented active magnetic bearing (AMB) is outlined. Acceleration augmentation is a key feature in providing improved dynamic performance of the controller. The optimal control formulation provides a convenient method of trading-off fast transient response and force attenuation as control objectives.
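    The trade-off mentioned above is set by the LQ weights. A scalar discrete-time sketch (plant parameters hypothetical, not an AMB model) shows how shifting weight between state error (q) and control force (r) changes the optimal feedback gain:

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Iterate the scalar discrete-time Riccati recursion to a fixed point
    and return the optimal state-feedback gain k (control u = -k * x)."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return k

# hypothetical scalar stand-in for one bearing axis: x[t+1] = a*x[t] + b*u[t]
a, b = 1.05, 0.1                            # open-loop unstable (|a| > 1)
k_fast = dlqr_scalar(a, b, q=10.0, r=0.1)   # penalize transient error heavily
k_soft = dlqr_scalar(a, b, q=1.0, r=10.0)   # penalize control force heavily
```

    Both gains stabilize the plant; the heavier state weighting yields the larger gain (faster transients, larger forces), which is the trade-off the optimal formulation exposes.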

  1. Electrolyzers Enhancing Flexibility in Electric Grids

    DOE PAGES

    Mohanpurkar, Manish; Luo, Yusheng; Terlip, Danny; ...

    2017-11-10

    This paper presents a real-time simulation with a hardware-in-the-loop (HIL)-based approach for verifying the performance of electrolyzer systems in providing grid support. Hydrogen refueling stations may use electrolyzer systems to generate hydrogen and are proposed to have the potential of becoming smarter loads that can proactively provide grid services. On the basis of experimental findings, electrolyzer systems with balance of plant are observed to have a high level of controllability and hence can add flexibility to the grid from the demand side. A generic front end controller (FEC) is proposed, which enables an optimal operation of the load on the basis of market and grid conditions. This controller has been simulated and tested in a real-time environment with electrolyzer hardware for a performance assessment. It can optimize the operation of electrolyzer systems on the basis of the information collected by a communication module. Real-time simulation tests are performed to verify the performance of the FEC-driven electrolyzers to provide grid support that enables flexibility, greater economic revenue, and grid support for hydrogen producers under dynamic conditions. In conclusion, the FEC proposed in this paper is tested with electrolyzers; however, it is proposed as a generic control topology that is applicable to any load.

  2. Fuzzy controller training using particle swarm optimization for nonlinear system control.

    PubMed

    Karakuzu, Cihan

    2008-04-01

    This paper proposes and describes an effective utilization of particle swarm optimization (PSO) to train a Takagi-Sugeno (TS)-type fuzzy controller. Performance evaluation of the proposed fuzzy training method using the obtained simulation results is provided with two samples of highly nonlinear systems: a continuous stirred tank reactor (CSTR) and a Van der Pol (VDP) oscillator. The superiority of the proposed learning technique is that there is no need for a partial derivative with respect to the parameter for learning. This fuzzy learning technique is suitable for real-time implementation, especially if the system model is unknown and a supervised training cannot be run. In this study, all parameters of the controller are optimized with PSO in order to prove that a fuzzy controller trained by PSO exhibits a good control performance.
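    The training idea above, PSO adjusting controller parameters using only objective evaluations and no partial derivatives, can be sketched as follows; the two-parameter quadratic is a hypothetical stand-in for the closed-loop tracking error of a TS fuzzy controller:

```python
import random

def pso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0, seed=3):
    """Global-best particle swarm minimization of f over [lo, hi]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):      # update personal and global bests
                pbest[i] = xs[i][:]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i][:]
    return gbest

# stand-in objective with an (assumed) optimum at (1.2, -0.4)
tracking_error = lambda p: (p[0] - 1.2) ** 2 + (p[1] + 0.4) ** 2
best = pso(tracking_error, dim=2)
```

    In the paper's setting, `tracking_error` would instead run a CSTR or Van der Pol simulation under the candidate fuzzy controller and return its control cost.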

  3. Shared prefetching to reduce execution skew in multi-threaded systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichenberger, Alexandre E; Gunnels, John A

    Mechanisms are provided for optimizing code to perform prefetching of data into a shared memory of a computing device that is shared by a plurality of threads that execute on the computing device. A memory stream of a portion of code that is shared by the plurality of threads is identified. A set of prefetch instructions is distributed across the plurality of threads. Prefetch instructions are inserted into the instruction sequences of the plurality of threads such that each instruction sequence has a separate sub-portion of the set of prefetch instructions, thereby generating optimized code. Executable code is generated based on the optimized code and stored in a storage device. The executable code, when executed, performs the prefetches associated with the distributed set of prefetch instructions in a shared manner across the plurality of threads.
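    The distribution step can be sketched as a round-robin partition of the identified memory stream's prefetch set; the addresses and thread count are hypothetical:

```python
def distribute_prefetches(stream, n_threads):
    """Round-robin split of a memory stream's prefetch addresses so each
    thread issues a disjoint sub-portion; with a shared cache, every thread
    still benefits from all of the prefetched lines."""
    return [stream[t::n_threads] for t in range(n_threads)]

addrs = [0x100 * i for i in range(10)]      # hypothetical stream addresses
parts = distribute_prefetches(addrs, 4)
```

    Each sub-list would then be lowered to prefetch instructions inserted into the corresponding thread's instruction sequence, reducing per-thread skew from redundant prefetching.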

  4. Topologically Optimized Nano-Positioning Stage Integrating with a Capacitive Comb Sensor.

    PubMed

    Chen, Tao; Wang, Yaqiong; Liu, Huicong; Yang, Zhan; Wang, Pengbo; Sun, Lining

    2017-01-28

    Nano-positioning technology has been widely used in many fields, such as microelectronics, optical engineering, and micro manufacturing. This paper presents a one-dimensional (1D) nano-positioning system, adopting a piezoelectric ceramic (PZT) actuator and a multi-objective topological optimal structure. The combination of a nano-positioning stage and a feedback capacitive comb sensor has been achieved. In order to obtain better performance, a wedge-shaped structure is used to apply the precise pre-tension for the piezoelectric ceramics. Through finite element analysis and experimental verification, better static performance and smaller kinetic coupling are achieved. The output displacement of the system achieves a long stroke of up to 14.7 μm and a high resolution of less than 3 nm. It provides a flexible and efficient approach to the design and optimization of nano-positioning systems.

  5. Optimization of the High-speed On-off Valve of an Automatic Transmission

    NASA Astrophysics Data System (ADS)

    Li-mei, ZHAO; Huai-chao, WU; Lei, ZHAO; Yun-xiang, LONG; Guo-qiao, LI; Shi-hao, TANG

    2018-03-01

    The response time of the high-speed on-off solenoid valve has a great influence on the performance of the automatic transmission. In order to reduce the response time of the high-speed on-off valve, a simulation model of the valve was built using the AMESim and Ansoft Maxwell software packages. To reduce the response time, an objective function based on the ITAE criterion was constructed and a genetic algorithm was used to optimize five parameters, including the number of coil turns and the working air gap. Comparison between experiment and simulation verifies the model. After optimization, the response time of the valve is reduced by 38.16%, and the valve meets the demands of the automatic transmission well. The results provide a theoretical reference for the improvement of automatic transmission performance.
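    The ITAE criterion used as the objective function can be sketched as follows; the first-order valve responses and time constants are hypothetical illustrations, not the AMESim/Maxwell model:

```python
import math

def itae(times, errors):
    """ITAE = integral of t * |e(t)| dt, here via the trapezoidal rule."""
    total = 0.0
    for t0, e0, t1, e1 in zip(times, errors, times[1:], errors[1:]):
        total += 0.5 * (t0 * abs(e0) + t1 * abs(e1)) * (t1 - t0)
    return total

dt = 1e-4
ts = [i * dt for i in range(2000)]          # 0.2 s horizon

def step_error(tau):
    # hypothetical first-order valve response error toward a unit setpoint
    return [math.exp(-t / tau) for t in ts]

fast = itae(ts, step_error(0.002))          # 2 ms time constant
slow = itae(ts, step_error(0.010))          # 10 ms time constant
```

    Because the error is weighted by elapsed time, ITAE punishes slow settling harder than an unweighted integral, which is why it suits response-time minimization.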

  6. Topologically Optimized Nano-Positioning Stage Integrating with a Capacitive Comb Sensor

    PubMed Central

    Chen, Tao; Wang, Yaqiong; Liu, Huicong; Yang, Zhan; Wang, Pengbo; Sun, Lining

    2017-01-01

    Nano-positioning technology has been widely used in many fields, such as microelectronics, optical engineering, and micro manufacturing. This paper presents a one-dimensional (1D) nano-positioning system, adopting a piezoelectric ceramic (PZT) actuator and a multi-objective topological optimal structure. The combination of a nano-positioning stage and a feedback capacitive comb sensor has been achieved. In order to obtain better performance, a wedge-shaped structure is used to apply the precise pre-tension for the piezoelectric ceramics. Through finite element analysis and experimental verification, better static performance and smaller kinetic coupling are achieved. The output displacement of the system achieves a long-stroke of up to 14.7 μm and high-resolution of less than 3 nm. It provides a flexible and efficient way in the design and optimization of the nano-positioning system. PMID:28134854

  7. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.

  8. An effective model for ergonomic optimization applied to a new automotive assembly line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-08

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work performed under correct conditions. The model includes a schematic and systematic analysis method for the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the operation repeatability makes the optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  9. Team Dynamics. Implications for Coaching.

    ERIC Educational Resources Information Center

    Freishlag, Jerry

    1985-01-01

    A recent survey of coaches ranks team cohesion as the most critical problem coaches face. Optimal interpersonal relationships among athletes and their coaches can maximize collective performance. Team dynamics are discussed and coaching tips are provided. (MT)

  10. Thermal/Structural Tailoring of Engine Blades (T/STAEBL) User's manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.

    1994-01-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that is able to perform numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, its optimization procedure, and provides an overview of the input required to run the program, as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.

  11. Thermal/Structural Tailoring of Engine Blades (T/STAEBL): User's manual

    NASA Astrophysics Data System (ADS)

    Brown, K. W.

    1994-03-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that is able to perform numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, its optimization procedure, and provides an overview of the input required to run the program, as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.

  12. Asymptotic analysis of SPTA-based algorithms for no-wait flow shop scheduling problem with release dates.

    PubMed

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.
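    A minimal single-machine sketch of a shortest-processing-time-available (SPTA) rule with release dates, simplified from the paper's no-wait flow shop setting; the job data are invented:

```python
def spta_schedule(jobs):
    """SPTA rule: whenever the machine is free, start the shortest among the
    released jobs, idling to the next release date if none is available.
    jobs: list of (release_date, processing_time). Returns the job sequence
    and the total completion time."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    available, seq = [], []
    t = total = 0.0
    i, n = 0, len(jobs)
    while len(seq) < n:
        while i < n and jobs[order[i]][0] <= t:
            available.append(order[i])          # jobs released by time t
            i += 1
        if not available:
            t = jobs[order[i]][0]               # idle until the next release
            continue
        available.sort(key=lambda j: jobs[j][1])
        j = available.pop(0)                    # shortest available job first
        t += jobs[j][1]
        seq.append(j)
        total += t                              # accumulate completion times
    return seq, total

jobs = [(0, 5), (0, 2), (1, 1), (6, 3)]
seq, tct = spta_schedule(jobs)
```

    In the flow shop version, each selected job occupies all machines without waiting, but the dispatch decision has the same greedy shortest-available form.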

  13. Structural Tailoring of Advanced Turboprops (STAT)

    NASA Technical Reports Server (NTRS)

    Brown, Kenneth W.

    1988-01-01

    This interim report describes the progress achieved in the Structural Tailoring of Advanced Turboprops (STAT) program, which was developed to perform numerical optimizations on highly swept propfan blades. The optimization procedure seeks to minimize an objective function, defined as either direct operating cost or aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. This report provides a detailed description of the input, optimization procedures, approximate analyses and refined analyses, as well as validation test cases for the STAT program. In addition, conclusions and recommendations are summarized.

  14. Asymptotic Analysis of SPTA-Based Algorithms for No-Wait Flow Shop Scheduling Problem with Release Dates

    PubMed Central

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774

  15. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions.
The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
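    The Kreisselmeier-Steinhauser function mentioned above aggregates multiple objectives or constraints into one smooth envelope that a gradient-based optimizer can handle; a minimal sketch with made-up objective values:

```python
import math

def ks(objectives, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative upper
    bound on max(objectives); larger rho tightens the bound."""
    m = max(objectives)                 # factor out max for numerical stability
    return m + math.log(sum(math.exp(rho * (g - m)) for g in objectives)) / rho

g = [0.3, 0.8, 0.75]                    # hypothetical normalized objectives
```

    Minimizing `ks(g)` drives down the worst objective while remaining differentiable, which is what makes it useful for composing the aerodynamic, sonic boom, and structural criteria into a single optimization target.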

  16. On the balancing of structural and acoustic performance of a sandwich panel based on topology, property, and size optimization

    NASA Astrophysics Data System (ADS)

    Cameron, Christopher J.; Lind Nordgren, Eleonora; Wennhage, Per; Göransson, Peter

    2014-06-01

    Balancing the structural and acoustic performance of a multi-layered sandwich panel is a formidable undertaking. Frequently, the weight reductions achieved while still meeting the structural design requirements are lost to the changes necessary to regain acceptable acoustic performance. To alleviate this, a design method for a multifunctional load bearing vehicle body panel is proposed which attempts to achieve a balance between structural and acoustic performance. The approach is based on numerical modelling of the structural and acoustic behaviour in a combined topology, size, and property optimization in order to achieve a three dimensional optimal distribution of structural and acoustic foam materials within the bounding surfaces of a sandwich panel. In particular, the effects of the coupling between one of the bounding surface face sheets and the acoustic foam are examined for their impact on both the structural and the acoustic overall performance of the panel. The results suggest a potential benefit in introducing an air gap between the acoustic foam parts and one of the face sheets, provided that the structural design constraints are met without prejudicing the layout of the different foam types.

  17. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modelling heteroscedastic residual errors

    NASA Astrophysics Data System (ADS)

    David, McInerney; Mark, Thyer; Dmitri, Kavetski; George, Kuczera

    2017-04-01

    This study provides guidance that enables hydrological researchers to provide probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g., high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. It is commonly known that hydrological model residual errors are heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
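    The Box-Cox transformation evaluated in this study (with the log scheme as its lambda = 0 limit) can be sketched as follows; the flow values are illustrative:

```python
import math

def boxcox(y, lam):
    """Box-Cox transform z = (y**lam - 1)/lam, with the log limit at lam = 0."""
    return math.log(y) if lam == 0.0 else (y ** lam - 1.0) / lam

def boxcox_inv(z, lam):
    """Inverse transform back to the raw (flow) space."""
    return math.exp(z) if lam == 0.0 else (lam * z + 1.0) ** (1.0 / lam)

flows = [0.5, 2.0, 40.0]                  # hypothetical daily streamflows
z02 = [boxcox(q, 0.2) for q in flows]     # lambda = 0.2 scheme
zlog = [boxcox(q, 0.0) for q in flows]    # log scheme (lambda = 0)
```

    Modelling residuals as homoscedastic in the transformed space, then mapping predictions back through the inverse, is how these schemes represent errors that grow with streamflow magnitude.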

  18. Using SpF to Achieve Petascale for Legacy Pseudospectral Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Jiang, Weiyuan

    2014-01-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. 
In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF as well as present preliminary performance results provided by the improved scalability.

  19. Study on key technologies of optimization of big data for thermal power plant performance

    NASA Astrophysics Data System (ADS)

    Mao, Mingyang; Xiao, Hong

    2018-06-01

    Thermal power generation accounts for 70% of China's power generation and for roughly 40% of the corresponding pollutant emissions, so optimizing thermal power efficiency requires monitoring and understanding the whole process of coal combustion and pollutant migration, while power system performance data show an explosive growth trend. The purpose of this study is to integrate numerical simulation with big data technology, developing a thermal power plant efficiency optimization platform and a nitrogen oxide emission reduction system that provide reliable technical support for improving plant efficiency, saving energy, and reducing emissions. The method applies big data technologies represented by multi-source heterogeneous data integration, distributed storage of large data, and high-performance real-time and off-line computing, which can greatly enhance a plant's energy consumption analysis capacity and level of intelligent decision-making; data mining algorithms are then used to establish a mathematical model of boiler combustion and to mine boiler efficiency data, combined with numerical simulation to uncover the rules of boiler combustion and pollutant generation and the influence of combustion parameters on them. The result is an optimization of boiler combustion parameters that achieves energy savings.

  20. A Bandwidth-Optimized Multi-Core Architecture for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    This paper presents an architecture template for next-generation high performance computing systems specifically targeted to irregular applications. We start our work by considering that future generation interconnection and memory bandwidth full-system numbers are expected to grow by a factor of 10. In order to keep up with such a communication capacity, while still resorting to fine-grained multithreading as the main way to tolerate unpredictable memory access latencies of irregular applications, we show how overall performance scaling can benefit from the multi-core paradigm. At the same time, we also show how such an architecture template must be coupled with specific techniques in order to optimize bandwidth utilization and achieve the maximum scalability. We propose a technique based on memory references aggregation, together with the related hardware implementation, as one of such optimization techniques. We explore the proposed architecture template by focusing on the Cray XMT architecture and, using a dedicated simulation infrastructure, validate the performance of our template with two typical irregular applications. Our experimental results prove the benefits provided by both the multi-core approach and the bandwidth optimization reference aggregation technique.

  1. Using Differential Evolution to Optimize Learning from Signals and Enhance Network Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmer, Paul K; Temple, Michael A; Buckner, Mark A

    2011-01-01

    Computer and communication network attacks are commonly orchestrated through Wireless Access Points (WAPs). This paper summarizes proof-of-concept research activity aimed at developing a physical layer Radio Frequency (RF) air monitoring capability to limit unauthorized WAP access and improve network security. This is done using Differential Evolution (DE) to optimize the performance of a Learning from Signals (LFS) classifier implemented with RF Distinct Native Attribute (RF-DNA) fingerprints. Performance of the resultant DE-optimized LFS classifier is demonstrated using 802.11a WiFi devices under the most challenging conditions of intra-manufacturer classification, i.e., using emissions of like-model devices that only differ in serial number. Using identical classifier input features, performance of the DE-optimized LFS classifier is assessed relative to a Multiple Discriminant Analysis / Maximum Likelihood (MDA/ML) classifier that has been used for previous demonstrations. The comparative assessment is made using both Time Domain (TD) and Spectral Domain (SD) fingerprint features. For all combinations of classifier type, feature type, and signal-to-noise ratio considered, results show that the DE-optimized LFS classifier with TD features is superior and provides up to 20% improvement in classification accuracy with proper selection of DE parameters.
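The DE-based tuning described above can be illustrated with a minimal DE/rand/1/bin implementation. The objective here is a stand-in sphere function in place of the (unpublished) cross-validated LFS classification error, and the population size, F, and CR values are illustrative assumptions, not the paper's settings:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=1):
    """Minimize f over box bounds with the classic DE/rand/1/bin scheme."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct donors, none equal to the target vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)          # guarantee one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            tc = f(trial)
            if tc <= cost[i]:                    # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Stand-in objective: a sphere function in place of the cross-validated
# LFS classification error, with two hypothetical tuning parameters.
params, err = differential_evolution(lambda p: sum(v * v for v in p),
                                     [(-5.0, 5.0), (-5.0, 5.0)])
```

As the abstract notes, classifier accuracy hinges on "proper selection of DE parameters" (F, CR, population size); in practice these would themselves be swept or meta-tuned.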

  2. Constraining neutron guide optimizations with phase-space considerations

    NASA Astrophysics Data System (ADS)

    Bertelsen, Mads; Lefmann, Kim

    2016-09-01

    We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing ability of the guide to be optimized, ranging from perfectly focusing to no correlation between position and velocity. The second parameter controls neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate a higher signal-to-noise ratio than conventional optimizations. Furthermore, exploration of the parameter controlling neutron intake shows that the simulated optimal neutron intake is close to the analytically predicted value, when assuming that the guide is dominated by multiple scattering events.

  3. Trajectory planning and optimal tracking for an industrial mobile robot

    NASA Astrophysics Data System (ADS)

    Hu, Huosheng; Brady, J. Michael; Probert, Penelope J.

    1994-02-01

    This paper introduces a unified approach to trajectory planning and tracking for an industrial mobile robot subject to non-holonomic constraints. We show (1) how a smooth trajectory is generated that takes into account the constraints from the dynamic environment and the robot kinematics; and (2) how a general predictive controller works to provide optimal tracking capability for nonlinear systems. The tracking performance of the proposed guidance system is analyzed by simulation.

  4. An integrated optimum design approach for high speed prop-rotors including acoustic constraints

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Wells, Valana; Mccarthy, Thomas; Han, Arris

    1993-01-01

    The objective of this research is to develop optimization procedures to provide design trends in high speed prop-rotors. The necessary disciplinary couplings are all considered within a closed loop multilevel decomposition optimization process. The procedures involve the consideration of blade-aeroelastic aerodynamic performance, structural-dynamic design requirements, and acoustics. Further, since the design involves consideration of several different objective functions, multiobjective function formulation techniques are developed.

  5. Optimal Sensor-Based Motion Planning for Autonomous Vehicle Teams

    DTIC Science & Technology

    2017-03-01

    calculated for non-dimensional ranges with Equation (3.26) and DU = 100 meters (shown at right) are equivalent to propagation loss calculated for 72 0 100...sensor and uniform target PDF, both choices are equivalent and the probability of non-detection equals the fraction of unsearched area. Time...feasible. Another goal is maximizing sensor performance in the presence of uncertainty. Optimal control provides a useful framework for solving these

  6. Illumination system development using design and analysis of computer experiments

    NASA Astrophysics Data System (ADS)

    Keresztes, Janos C.; De Ketelaere, Bart; Audenaert, Jan; Koshel, R. J.; Saeys, Wouter

    2015-09-01

    Computer assisted optimal illumination design is crucial when developing cost-effective machine vision systems. Standard local optimization methods, such as downhill simplex optimization (DHSO), often result in an optimal solution that is influenced by the starting point by converging to a local minimum, especially when dealing with high dimensional illumination designs or nonlinear merit spaces. This work presents a novel nonlinear optimization approach based on design and analysis of computer experiments (DACE). The methodology is first illustrated with a 2D case study of four light sources symmetrically positioned along a fixed arc in order to obtain optimal irradiance uniformity on a flat Lambertian reflecting target at the arc center. The first step consists of choosing angular positions with no overlap between sources using a fast, flexible space filling design. Ray-tracing simulations are then performed at the design points and a merit function is used for each configuration to quantify the homogeneity of the irradiance at the target. The homogeneities obtained at the design points are then used as input to a Gaussian Process (GP), which develops a preliminary distribution for the expected merit space. Global optimization is then performed on the GP, which is more likely to provide optimal parameters. Next, the light positioning case study is further investigated by varying the radius of the arc, and by adding two spots symmetrically positioned along an arc diametrically opposed to the first one. In terms of convergence, DACE reached equal uniformity of 97% six times faster than the standard simplex method. The obtained results were successfully validated experimentally using a short-wavelength infrared (SWIR) hyperspectral imager monitoring a Spectralon panel illuminated by tungsten halogen sources, with 10% relative error.
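The DACE loop described above — space-filling design, expensive evaluations at the design points, a Gaussian Process surrogate fitted to them, then a cheap global search on the surrogate — can be sketched in one dimension. The merit function, kernel length-scale, and design size below are illustrative stand-ins for the ray-traced uniformity merit:

```python
import math

def rbf(a, b, ls=0.15):
    """Squared-exponential correlation between two scalar design points."""
    return math.exp(-(a - b) ** 2 / (2 * ls ** 2))

def solve(A, y):
    """Solve A x = y by Gauss-Jordan elimination (small systems only)."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                fac = M[r][col] / M[col][col]
                M[r] = [mr - fac * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_mean(xs, ys, query, noise=1e-8):
    """Posterior mean of a zero-mean GP conditioned on (xs, ys)."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(a * rbf(x, query) for a, x in zip(alpha, xs))

# Stand-in merit: irradiance non-uniformity vs. a normalized source angle
# (the real merit would come from ray-tracing simulations).
merit = lambda ang: (ang - 0.6) ** 2 + 0.05 * math.sin(25 * ang)

design = [i / 9 for i in range(10)]        # 1-D space-filling design
observed = [merit(a) for a in design]      # expensive evaluations
grid = [i / 400 for i in range(401)]       # cheap global search on surrogate
best = min(grid, key=lambda g: gp_mean(design, observed, g))
```

The point of the surrogate is that the 401-point global search costs nothing compared with 401 ray-tracing runs; only the 10 design points are simulated.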

  7. Capacity of noncoherent MFSK channels

    NASA Technical Reports Server (NTRS)

    Bar-David, I.; Butman, S. A.; Klass, M. J.; Levitt, B. K.; Lyon, R. F.

    1974-01-01

    Performance limits theoretically achievable over noncoherent channels perturbed by additive Gaussian noise in hard decision, optimal, and soft decision receivers are computed as functions of the number of orthogonal signals and the predetection signal-to-noise ratio. Equations are derived for orthogonal signal capacity, the ultimate MFSK capacity, and the convolutional coding and decoding limit. It is shown that performance improves as the signal-to-noise ratio increases, provided the bandwidth can be increased, that the optimum number of signals is not infinite (except for the optimal receiver), and that the optimum number decreases as the signal-to-noise ratio decreases, but is never less than 7 for even the hard decision receiver.
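The hard-decision receiver discussed above reduces, under the usual symmetry assumption, to an M-ary symmetric channel, whose capacity has a closed form. A minimal sketch (the mapping from predetection signal-to-noise ratio to symbol-error probability p is omitted here):

```python
import math

def mfsk_hard_decision_capacity(M, p):
    """Capacity (bits/symbol) of the M-ary symmetric channel that models a
    hard-decision noncoherent MFSK receiver with symbol-error probability p."""
    if p == 0.0:
        return math.log2(M)
    return (math.log2(M)
            + (1 - p) * math.log2(1 - p)
            + p * math.log2(p / (M - 1)))

# Capacity falls as p grows and vanishes at p = (M-1)/M (uniform guessing).
caps = {p: mfsk_hard_decision_capacity(8, p) for p in (0.0, 0.1, 0.5, 7 / 8)}
```

Sweeping M at a fixed predetection SNR (via the corresponding p for each M) is what yields the optimum, finite number of signals reported in the abstract.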

  8. Topology Optimization using the Level Set and eXtended Finite Element Methods: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Villanueva Perez, Carlos Hernan

    Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regards to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs and resembled well the results from previous 2D or density-based studies.

  9. Recall Performance for Content-Addressable Memory Using Adiabatic Quantum Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Humble, Travis S.; McCaskey, Alex

    A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps the recurrent neural network to a commercially available quantum processing unit by taking advantage of the common underlying Ising spin model. We then assess the accuracy of the quantum processor to store key-value associations by quantifying recall performance against an ensemble of problem sets. We observe that different learning rules from the neural network community influence recall accuracy but performance appears to be limited by potential noise in the processor. The strong connection established between quantum processors and neural network problems supports the growing intersection of these two ideas.
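The recurrent-network recall that the quantum annealer emulates can be sketched classically: a Hopfield network with Hebbian weights, whose asynchronous updates descend the same Ising energy E = -1/2 Σ_ij W_ij s_i s_j that adiabatic quantum optimization anneals over. The pattern and probe below are illustrative:

```python
import random

def hebbian_weights(patterns):
    """Symmetric Hopfield weights storing a list of ±1 patterns (Hebb rule)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, probe, sweeps=10, seed=0):
    """Asynchronous recall: each spin aligns with its local field, lowering
    the Ising energy E = -1/2 * sum_ij W_ij s_i s_j at every step."""
    rng = random.Random(seed)
    s = probe[:]
    n = len(s)
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):        # random update order
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, -1, 1, 1, -1, -1, 1, -1]            # illustrative key pattern
W = hebbian_weights([stored])
noisy = stored[:]
noisy[0], noisy[3] = -noisy[0], -noisy[3]        # corrupt two of eight bits
restored = recall(W, noisy)
```

The Hebb rule is one of the "learning rules from the neural network community" the abstract refers to; alternatives (e.g., the pseudo-inverse rule) change W but leave the Ising energy landscape formulation, and hence the quantum mapping, intact.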

  10. NREL Evaluates Performance of Fast-Charge Electric Buses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-09-16

    This real-world performance evaluation is designed to enhance understanding of the overall usage and effectiveness of electric buses in transit operation and to provide unbiased technical information to other agencies interested in adding such vehicles to their fleets. Initial results indicate that the electric buses under study offer significant fuel and emissions savings. The final results will help Foothill Transit optimize the energy-saving potential of its transit fleet. NREL's performance evaluations help vehicle manufacturers fine-tune their designs and help fleet managers select fuel-efficient, low-emission vehicles that meet their bottom line and operational goals.

  11. Optimization of armored fighting vehicle crew performance in a net-centric battlefield

    NASA Astrophysics Data System (ADS)

    McKeen, William P.; Espenant, Mark

    2002-08-01

    Traditional display, control and situational awareness technologies may not allow the fighting vehicle commander to take full advantage of the rich data environment made available in the net-centric battlefield of the future. Indeed, the sheer complexity and volume of available data, if not properly managed, may actually reduce crew performance by overloading or confusing the commander with irrelevant information. New techniques must be explored to understand how to present battlefield information and provide the commander with continuous high quality situational awareness without significant cognitive overhead. Control of the vehicle's many complex systems must also be addressed; the entire Soldier Machine Interface must be optimized if we are to realize the potential performance improvements. Defence Research and Development Canada (DRDC) and General Dynamics Canada Ltd. have embarked on a joint program, the Future Armoured Fighting Vehicle Systems Technology Demonstrator, to explore these issues. The project is based on man-in-the-loop experimentation using virtual reality technology on a six degree-of-freedom motion platform that simulates the motion, sights and sounds inside a future armoured vehicle. The vehicle commander is provided with a virtual reality vision system to view a simulated 360 degree multi-spectrum representation of the battlespace, thus providing enhanced situational awareness. Graphic overlays with decision aid information will be added to reduce cognitive loading. Experiments will be conducted to evaluate the effectiveness of virtual control systems. The simulations are carried out in a virtual battlefield created by linking our simulation system with other simulation centers to provide a net-centric battlespace where enemy forces can be engaged in fire fights. Survivability and lethality will be measured in successive test sequences using real armoured fighting vehicle crews to optimize overall system effectiveness.

  12. Continuous intensity map optimization (CIMO): A novel approach to leaf sequencing in step and shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao Daliang; Earl, Matthew A.; Luan, Shuang

    2006-04-15

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
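A toy version of the CIMO idea — simultaneously perturbing aperture shapes and weights under simulated annealing to match an optimized intensity map — can be sketched in one dimension. The target profile, segment count, and cooling schedule below are illustrative assumptions, not the clinical algorithm:

```python
import math
import random

def delivered(apertures, n):
    """1-D intensity delivered by weighted open leaf intervals (toy model)."""
    out = [0.0] * n
    for left, right, w in apertures:
        for i in range(left, right):
            out[i] += w
    return out

def cimo_anneal(target, n_seg=3, steps=4000, seed=2):
    """Jointly perturb aperture shapes and weights under simulated annealing
    to minimize the squared mismatch with the optimized intensity map."""
    rng = random.Random(seed)
    n = len(target)
    aps = [[0, n, 1.0] for _ in range(n_seg)]
    cost = lambda a: sum((d - t) ** 2 for d, t in zip(delivered(a, n), target))
    c = cost(aps)
    best_aps, best_c = [seg[:] for seg in aps], c
    for k in range(steps):
        T = 0.999 ** k                           # geometric cooling
        cand = [seg[:] for seg in aps]
        seg = rng.randrange(n_seg)
        field = rng.randrange(3)
        if field < 2:                            # move one leaf position
            cand[seg][field] = rng.randrange(n + 1)
            if cand[seg][0] > cand[seg][1]:
                cand[seg][0], cand[seg][1] = cand[seg][1], cand[seg][0]
        else:                                    # change the segment weight
            cand[seg][2] = max(0.0, cand[seg][2] + rng.uniform(-0.3, 0.3))
        cc = cost(cand)
        if cc < c or rng.random() < math.exp((c - cc) / max(T, 1e-9)):
            aps, c = cand, cc
            if c < best_c:
                best_aps, best_c = [seg[:] for seg in aps], c
    return best_aps, best_c

target = [0, 1, 2, 2, 1, 1, 0, 0]                # toy optimized fluence slice
segments, residual = cimo_anneal(target)
```

Note how both move types appear in one annealing loop: that joint search over shapes and weights, rather than rule-based clustering into discrete levels, is the distinguishing feature the abstract describes.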

  13. Variability-aware double-patterning layout optimization for analog circuits

    NASA Astrophysics Data System (ADS)

    Li, Yongfu; Perez, Valerio; Tripathi, Vikas; Lee, Zhao Chuan; Tseng, I.-Lun; Ong, Jonathan Yoong Seang

    2018-03-01

    The semiconductor industry has adopted multi-patterning techniques to manage the delayed availability of extreme ultraviolet lithography technology. During the design process of double-patterning lithography layout masks, two polygons are assigned to different masks if their spacing is less than the minimum printable spacing. With these additional design constraints, it is very difficult to find experienced layout-design engineers who have a good enough understanding of the circuit to manually optimize the mask layers in order to minimize color-induced circuit variations. In this work, we investigate the impact of double-patterning lithography on analog circuits and provide quantitative analysis for our designers to select the optimal mask to minimize the circuit's mismatch. To overcome the problem and improve the turn-around time, we propose a smart "anchoring" placement technique to optimize mask decomposition for analog circuits. We have developed a software prototype that is capable of placing anchoring markers in the layout, allowing industry-standard tools to perform the automated color decomposition process.

  14. Optimization of constrained density functional theory

    NASA Astrophysics Data System (ADS)

    O'Regan, David D.; Teobaldi, Gilberto

    2016-07-01

    Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.

  15. Testing and Optimizing a Stove-Powered Thermoelectric Generator with Fan Cooling.

    PubMed

    Zheng, Youqu; Hu, Jiangen; Li, Guoneng; Zhu, Lingyun; Guo, Wenwen

    2018-06-07

    In order to provide heat and electricity under emergency conditions in off-grid areas, a stove-powered thermoelectric generator (STEG) was designed and optimized. No battery was incorporated, ensuring it would work anytime, anywhere, as long as combustible materials were provided. The startup performance, power load feature and thermoelectric (TE) efficiency were investigated in detail. Furthermore, the heat-conducting plate thickness, cooling fan selection, heat sink dimension and TE module configuration were optimized. The heat flow method was employed to determine the TE efficiency, which was compared to the predicted data. Results showed that the STEG can supply clean-and-warm air (625 W) and electricity (8.25 W at 5 V) continuously at a temperature difference of 148 °C, and the corresponding TE efficiency was measured to be 2.31%. Optimization showed that the choice of heat-conducting plate thickness, heat sink dimensions and cooling fan were inter-dependent, and the TE module configuration affected both the startup process and the power output.

  16. Performance of a Limiting-Antigen Avidity Enzyme Immunoassay for Cross-Sectional Estimation of HIV Incidence in the United States

    PubMed Central

    Konikoff, Jacob; Brookmeyer, Ron; Longosz, Andrew F.; Cousins, Matthew M.; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Koblin, Beryl A.; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Eshleman, Susan H.; Laeyendecker, Oliver

    2013-01-01

    Background A limiting antigen avidity enzyme immunoassay (HIV-1 LAg-Avidity assay) was recently developed for cross-sectional HIV incidence estimation. We evaluated the performance of the LAg-Avidity assay alone and in multi-assay algorithms (MAAs) that included other biomarkers. Methods and Findings Performance of testing algorithms was evaluated using 2,282 samples from individuals in the United States collected 1 month to >8 years after HIV seroconversion. The capacity of selected testing algorithms to accurately estimate incidence was evaluated in three longitudinal cohorts. When used in a single-assay format, the LAg-Avidity assay classified some individuals infected >5 years as assay positive and failed to provide reliable incidence estimates in cohorts that included individuals with long-term infections. We evaluated >500,000 testing algorithms that included the LAg-Avidity assay alone and MAAs with other biomarkers (BED capture immunoassay [BED-CEIA], BioRad-Avidity assay, HIV viral load, CD4 cell count), varying the assays and assay cutoffs. We identified an optimized 2-assay MAA that included the LAg-Avidity and BioRad-Avidity assays, and an optimized 4-assay MAA that included those assays as well as HIV viral load and CD4 cell count. The two optimized MAAs classified all 845 samples from individuals infected >5 years as MAA negative and estimated incidence within a year of sample collection. These two MAAs produced incidence estimates that were consistent with those from longitudinal follow-up of cohorts. A comparison of the laboratory assay costs of the MAAs was also performed, and we found that the costs associated with the optimal 2-assay MAA were substantially less than those of the 4-assay MAA. Conclusions The LAg-Avidity assay did not perform well in a single-assay format, regardless of the assay cutoff. 
MAAs that include the LAg-Avidity and BioRad-Avidity assays, with or without viral load and CD4 cell count, provide accurate incidence estimates. PMID:24386116

  17. Evolutionary Optimization of a Quadrifilar Helical Antenna

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Linden, Derek S.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Automated antenna synthesis via evolutionary design has recently garnered much attention in the research literature. Evolutionary algorithms show promise because, among search algorithms, they are able to effectively search large, unknown design spaces. NASA's Mars Odyssey spacecraft is due to reach final Martian orbit insertion in January, 2002. Onboard the spacecraft is a quadrifilar helical antenna that provides telecommunications in the UHF band with landed assets, such as robotic rovers. Each helix is driven by the same signal which is phase-delayed in 90 deg increments. A small ground plane is provided at the base. It is designed to operate in the frequency band of 400-438 MHz. Based on encouraging previous results in automated antenna design using evolutionary search, we wanted to see whether such techniques could improve upon Mars Odyssey antenna design. Specifically, a co-evolutionary genetic algorithm is applied to optimize the gain and size of the quadrifilar helical antenna. The optimization was performed in-situ in the presence of a neighboring spacecraft structure. On the spacecraft, a large aluminum fuel tank is adjacent to the antenna. Since this fuel tank can dramatically affect the antenna's performance, we leave it to the evolutionary process to see if it can exploit the fuel tank's properties advantageously. Optimizing in the presence of surrounding structures would be quite difficult for human antenna designers, and thus the actual antenna was designed for free space (with a small ground plane). In fact, when flying on the spacecraft, surrounding structures that are moveable (e.g., solar panels) may be moved during the mission in order to improve the antenna's performance.

  18. Retention prediction and separation optimization under multilinear gradient elution in liquid chromatography with Microsoft Excel macros.

    PubMed

    Fasoula, S; Zisi, Ch; Gika, H; Pappa-Louisi, A; Nikitas, P

    2015-05-22

    A package of Excel VBA macros has been developed for modeling multilinear gradient retention data obtained in single or double gradient elution mode by changing organic modifier(s) content and/or eluent pH. For this purpose, ten chromatographic models were used and four methods were adopted for their application. The methods were based on (a) the analytical expression of the retention time, provided that this expression is available, (b) the retention times estimated using the Nikitas-Pappa approach, (c) the stepwise approximation, and (d) a simple numerical approximation involving the trapezoid rule for integration of the fundamental equation for gradient elution. For all these methods, Excel VBA macros have been written and implemented using two different platforms: the fitting platform and the optimization platform. The fitting platform calculates not only the adjustable parameters of the chromatographic models, but also the significance of these parameters, and furthermore predicts the analyte elution times. The optimization platform determines the gradient conditions that lead to the optimum separation of a mixture of analytes by using the Solver evolutionary mode, provided that proper constraints are set in order to obtain the optimum gradient profile in the minimum gradient time. The performance of the two platforms was tested using experimental and artificial data. It was found that using the proposed spreadsheets, fitting, prediction, and optimization can be performed easily and effectively under all conditions. Overall, the best performance is exhibited by the analytical and Nikitas-Pappa's methods, although the former cannot be used under all circumstances. Copyright © 2015 Elsevier B.V. All rights reserved.
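Method (d), trapezoid-rule integration of the fundamental equation for gradient elution, can be sketched as follows. The equation solved is ∫₀^(tR−t0) dt / (t0·k(t)) = 1, accumulated step by step until the integral reaches unity; the linear-solvent-strength retention model and all parameter values below are illustrative assumptions:

```python
import math

def retention_time(k_of_t, t0, dt=0.001, t_max=200.0):
    """Solve integral_0^(tR - t0) dt / (t0 * k(t)) = 1 for tR by accumulating
    the integrand with the trapezoid rule (method (d) in the text)."""
    acc, t = 0.0, 0.0
    f_prev = 1.0 / (t0 * k_of_t(0.0))
    while acc < 1.0 and t < t_max:
        t += dt
        f = 1.0 / (t0 * k_of_t(t))
        acc += 0.5 * (f_prev + f) * dt   # trapezoid-rule increment
        f_prev = f
    return t + t0

# Linear-solvent-strength model ln k = ln k0 - S*phi with a linear gradient
# phi(t) = 0.2 + 0.01*t; k0, S, and the gradient slope are illustrative.
k0, S, t0 = 50.0, 10.0, 1.0
k = lambda t: k0 * math.exp(-S * (0.2 + 0.01 * t))
tr = retention_time(k, t0)
```

In the isocratic limit (constant k) the loop reproduces the textbook result tR = t0·(1 + k), which makes a convenient sanity check for the step size dt.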

  19. SU-E-T-270: Optimized Shielding Calculations for Medical Linear Accelerators (LINACs).

    PubMed

    Muhammad, W; Lee, S; Hussain, A

    2012-06-01

    The purpose of radiation shielding is to reduce the effective equivalent dose from a medical linear accelerator (LINAC) to a point outside the room to a level determined by individual state/international regulations. The study was performed to design LINAC rooms for newly planned radiotherapy centers. Optimized shielding calculations were performed for LINACs having maximum photon energy of 20 MV based on NCRP 151. The maximum permissible dose limits were kept at 0.04 mSv/week and 0.002 mSv/week for controlled and uncontrolled areas, respectively, following the ALARA principle. The planned LINAC room was compared to an already constructed (non-optimized) LINAC room to evaluate the shielding costs and the other facilities directly related to the room design. In the evaluation process it was noted that the non-optimized room size (i.e., 610 × 610 cm², or 20 feet × 20 feet) is not suitable for total body irradiation (TBI), even though the machine installed inside had TBI capability and the license had been acquired. With this point in view, the optimized LINAC room size was kept at 762 × 762 cm². Although the area of the optimized room was greater than that of the non-optimized room (762 × 762 cm² instead of 610 × 610 cm²), the shielding cost for the optimized LINAC room was reduced by 15%. When optimized shielding calculations were re-performed for the non-optimized room (i.e., keeping room size, occupancy factors, workload, etc. the same), it was found that the shielding cost could be reduced by up to 41%. In conclusion, a non-optimized LINAC room can not only place an extra financial burden on the hospital but can also cause serious issues in providing health care for patients. © 2012 American Association of Physicists in Medicine.
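The NCRP 151 barrier calculation underlying such optimizations can be sketched: the required barrier transmission is B = P·d² / (W·U·T), converted to a thickness via tenth-value layers (TVLs). All numbers below are illustrative assumptions, not a clinical calculation:

```python
import math

def primary_barrier_thickness(P, d, W, U, T, tvl1, tvle):
    """NCRP 151 primary-barrier method: required transmission
    B = P * d^2 / (W * U * T), converted to a thickness using the first
    tenth-value layer tvl1 and the equilibrium tenth-value layer tvle."""
    B = P * d ** 2 / (W * U * T)
    n = -math.log10(B)                 # number of TVLs needed
    return tvl1 + (n - 1) * tvle if n > 1 else n * tvl1

# Illustrative numbers only: shielding design goal P = 0.04 mSv/wk = 4e-5
# Sv/wk (controlled area), distance d = 6 m, workload W = 500 Gy/wk, use
# factor U = 0.25, occupancy T = 1, and concrete TVLs of roughly 44 cm
# (first) and 41 cm (equilibrium) for a high-energy photon beam.
t_cm = primary_barrier_thickness(4e-5, 6.0, 500.0, 0.25, 1.0, 44.0, 41.0)
```

The cost optimization in the abstract follows directly: workload, use, and occupancy factors chosen realistically per barrier shrink n, and each TVL saved is tens of centimeters of concrete.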

  20. Optical performance of random anti-reflection structured surfaces (rARSS) on spherical lenses

    NASA Astrophysics Data System (ADS)

    Taylor, Courtney D.

Random anti-reflection structured surfaces (rARSS) have been reported to improve the transmittance of optical-grade fused silica planar substrates to values greater than 99%. These textures are fabricated directly on the substrates using reactive-ion/inductively-coupled plasma etching (RIE/ICP) techniques, and often result in transmitted spectra with no measurable interference effects (fringes) over a wide range of wavelengths. The RIE/ICP processes used to etch the rARSS are anisotropic and thus well suited for planar components. The improvement in spectral transmission has been found to be independent of optical incidence angle for values from 0° to +/-30°. Qualifying and quantifying rARSS performance on curved substrates, such as convex lenses, is required to optimize the fabrication of the desired AR effect on optical-power elements. In this work, rARSS was fabricated on fused silica plano-convex (PCX) and plano-concave (PCV) lenses using a planar-substrate-optimized RIE process to maximize optical transmission in the range from 500 to 1100 nm. An additional set of lenses was etched in a non-optimized ICP process to provide further comparisons. Results are presented from optical transmission and beam propagation tests (optimized lenses only) of rARSS lenses for both TE and TM incident polarizations at a wavelength of 633 nm and over a 70° full field of view, in both singlet and doublet configurations. These results suggest that optimization of the fabrication process is not required, mainly due to the wide angle-of-incidence AR tolerance of the rARSS lenses. Lenses etched with the non-optimized recipe showed low transmission enhancement, confirming the need to optimize etch recipes before transferring the process to PCX/PCV lenses. Beam propagation tests indicated no major beam degradation through the optimized lens elements. 
Scanning electron microscopy (SEM) images confirmed different surface structures on optimized and non-optimized samples, and also indicated isotropically oriented surface structures on both types of lenses.
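
For context on why transmittance above 99% is notable: an untreated fused silica element loses roughly 4% per surface to Fresnel reflection. A minimal sketch, assuming normal incidence and a round-number refractive index of 1.45 (an illustrative value, not taken from the record above):

```python
import math

def fresnel_R_normal(n1: float, n2: float) -> float:
    """Reflectance at normal incidence for an uncoated interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Air to fused silica in the visible (n ~ 1.45, an assumed round value).
n_air, n_silica = 1.0, 1.45
R = fresnel_R_normal(n_air, n_silica)
T_two_surfaces = (1 - R) ** 2  # plano element: entrance and exit face

print(f"per-surface reflectance: {R:.4f}")          # ~0.0337
print(f"two-surface transmittance: {T_two_surfaces:.4f}")  # ~0.9337
```

So an untextured element transmits only about 93%, which is the gap the rARSS texture closes.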

  1. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure, so standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing, so the concept of optimality must include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem, and a controller structure is defined on this basis. Then, an algorithm is presented which an operator can use to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, determine what changes to the process inputs or controller parameters should be made to do so; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes that improve all other criteria. If a new operating point can be reached without making any tradeoff, the current point is not optimal in any sense. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. 
The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
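
The first branch of the algorithm described above, deciding whether every quality criterion can be improved at once, can be sketched as a search for a common improvement direction under a local linear model. The Jacobian values and the "larger is better" sign convention below are illustrative assumptions, not from the thesis:

```python
import random

def improving_direction(J, trials=20000, seed=0):
    """Search for an input change d that improves every quality
    criterion at once under a local linear model dy = J @ d.
    'Improve' is taken as dy_i > 0 for all criteria (an assumed
    convention; flip signs per criterion as needed).
    Returns a direction d, or None if none was found."""
    rng = random.Random(seed)
    m = len(J[0])
    for _ in range(trials):
        d = [rng.uniform(-1, 1) for _ in range(m)]
        dy = [sum(row[j] * d[j] for j in range(m)) for row in J]
        if all(v > 0 for v in dy):
            return d
    return None  # likely at a tradeoff (Pareto) point

# Two criteria, two inputs; here raising input 0 helps both criteria.
J = [[1.0, 0.2],
     [0.8, -0.1]]
print("joint improvement possible:", improving_direction(J) is not None)
```

When the search returns None, the operator is at (or near) a Pareto point and the algorithm's second branch, choosing which criterion to trade off, applies.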

  2. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. 
This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
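
The efficiency argument for the adjoint method, namely that one reverse pass yields the derivative of a scalar output with respect to every input, can be illustrated on a toy two-stage computation (a generic sketch, not the framework's actual formulation):

```python
import math

def forward(x):
    u = [math.sin(xi) for xi in x]   # stage 1
    y = sum(ui ** 2 for ui in u)     # stage 2 (scalar output)
    return u, y

def adjoint(x):
    """One reverse sweep gives dy/dx_i for ALL inputs at once."""
    u, _ = forward(x)
    ybar = 1.0                                 # seed: dy/dy = 1
    ubar = [2 * ui * ybar for ui in u]         # dy/du_i
    return [ubar[i] * math.cos(x[i]) for i in range(len(x))]  # dy/dx_i

x = [0.1 * i for i in range(5)]
grad = adjoint(x)

# Finite-difference check on one component.
h = 1e-6
_, y0 = forward(x)
xp = list(x); xp[3] += h
_, y1 = forward(xp)
print(abs(grad[3] - (y1 - y0) / h) < 1e-4)  # True
```

The cost of the reverse sweep is independent of the number of inputs, which is why adjoints make tens of thousands of design variables tractable.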

  3. [Not Available].

    PubMed

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

Optimal design problems are widely known for their multiple performance measures, which often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. An outranking with PROMETHEE II then helps the decision-maker finalize the selection of a best compromise. The effectiveness of the NSGA-II method on a multiobjective optimization problem is illustrated through two carefully referenced examples.

  4. The effect of nanoparticle packing on capacitive electrode performance.

    PubMed

    Lee, Younghee; Noh, Seonmyeong; Kim, Min-Sik; Kong, Hye Jeong; Im, Kyungun; Kwon, Oh Seok; Kim, Sungmin; Yoon, Hyeonseok

    2016-06-09

    Nanoparticles pack together to form macro-scale electrodes in various types of devices, and thus, optimization of the nanoparticle packing is a prerequisite for the realization of a desirable device performance. In this work, we provide in-depth insight into the effect of nanoparticle packing on the performance of nanoparticle-based electrodes by combining experimental and computational findings. As a model system, polypyrrole nanospheres of three different diameters were used to construct pseudocapacitive electrodes, and the performance of the electrodes was examined at various nanosphere diameter ratios and mixed weight fractions. Two numerical algorithms are proposed to simulate the random packing of the nanospheres on the electrode. The binary nanospheres exhibited diverse, complicated packing behaviors compared with the monophasic packing of each nanosphere species. The packing of the two nanosphere species with lower diameter ratios at an optimized composition could lead to more dense packing of the nanospheres, which in turn could contribute to better device performance. The dense packing of the nanospheres would provide more efficient transport pathways for ions because of the reduced inter-nanosphere pore size and enlarged surface area for charge storage. Ultimately, it is anticipated that our approach can be widely used to define the concept of "the best nanoparticle packing" for desirable device performance.
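
A crude way to experiment with binary-sphere packing is random sequential addition (RSA) in a box. The diameters and box size below are arbitrary; the packing algorithms proposed in the paper are considerably more sophisticated than this sketch:

```python
import math
import random

def rsa_packing(radii_pool, box=10.0, attempts=20000, seed=1):
    """Random sequential addition: draw a radius, propose a random
    position inside the box, keep the sphere only if it overlaps
    nothing already placed. Returns (count, packing fraction)."""
    rng = random.Random(seed)
    placed = []  # (x, y, z, r)
    for _ in range(attempts):
        r = rng.choice(radii_pool)
        x, y, z = (rng.uniform(r, box - r) for _ in range(3))
        if all((x - a) ** 2 + (y - b) ** 2 + (z - c) ** 2 >= (r + s) ** 2
               for a, b, c, s in placed):
            placed.append((x, y, z, r))
    solid = sum(4 / 3 * math.pi * r ** 3 for *_, r in placed)
    return len(placed), solid / box ** 3

# Binary mixture with diameter ratio 2:1 (illustrative values).
n, phi = rsa_packing([1.0, 0.5])
print(n, round(phi, 3))
```

Varying the radius pool and mixing fractions shows the same qualitative trend the paper reports: composition changes the achievable packing fraction.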

  5. Integrated modeling environment for systems-level performance analysis of the Next-Generation Space Telescope

    NASA Astrophysics Data System (ADS)

    Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry

    1998-08-01

All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular, tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. 
The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly links to science requirements.

  6. Development of performance properties of ternary mixtures : field demonstrations and project summary.

    DOT National Transportation Integrated Search

    2012-07-01

Supplementary cementitious materials (SCM) have become common parts of modern concrete practice. The blending of two or three cementitious materials to optimize durability, strength, or economics provides owners, engineers, materials suppliers, and...

  7. THERMAL DEPOLYMERIZATION OF POSTCONSUMER PLASTICS

    EPA Science Inventory

    The University of North Dakota Energy & Environmental Research Center (EERC) performed two series of tests to evaluate process conditions for thermal depolymerization of postconsumer plastics. The objective of the first test series was to provide data for optimization of reactio...

  8. Closed-loop control of anesthesia: a primer for anesthesiologists.

    PubMed

    Dumont, Guy A; Ansermino, J Mark

    2013-11-01

    Feedback control is ubiquitous in nature and engineering and has revolutionized safety in fields from space travel to the automobile. In anesthesia, automated feedback control holds the promise of limiting the effects on performance of individual patient variability, optimizing the workload of the anesthesiologist, increasing the time spent in a more desirable clinical state, and ultimately improving the safety and quality of anesthesia care. The benefits of control systems will not be realized without widespread support from the health care team in close collaboration with industrial partners. In this review, we provide an introduction to the established field of control systems research for the everyday anesthesiologist. We introduce important concepts such as feedback and modeling specific to control problems and provide insight into design requirements for guaranteeing the safety and performance of feedback control systems. We focus our discussion on the optimization of anesthetic drug administration.
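
A minimal closed-loop sketch of the feedback idea discussed above: a PID controller driving a first-order plant toward a setpoint. This is a generic control-systems illustration with made-up gains and plant, not a model of anesthetic dosing:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.1, steps=300):
    """Closed-loop PID on a first-order plant x' = (-x + u) / tau,
    integrated with explicit Euler. All parameters are illustrative."""
    tau = 2.0
    x, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # the feedback law
        prev_err = err
        x += dt * (-x + u) / tau                 # plant response
    return x

final = simulate_pid(kp=2.0, ki=0.5, kd=0.1)
print(round(final, 3))  # settles near the setpoint 1.0
```

The integral term is what removes steady-state error; in a clinical controller the same loop structure carries the added burden of safety constraints and patient variability discussed in the review.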

  9. Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.

    PubMed

    Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen

    2018-05-01

In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for discrete-time nonlinear zero-sum game problems. First, an online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with the optimal regulation control problem. By shifting the definition of the performance index backward one step, the proposed method relaxes the requirement for the system dynamics or an identifier. Then, three neural networks are established to approximate the optimal saddle-point feedback control law, the disturbance law, and the performance index, respectively. Explicit updating rules for these three neural networks are provided based on the data generated online along the system trajectories. Stability analysis in terms of the neural network approximation errors is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.
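
For intuition about the saddle point being approximated, here is the discrete, pure-strategy analogue for a zero-sum matrix game, a textbook illustration with made-up payoffs, far simpler than the continuous Hamilton-Jacobi-Isaacs setting of the paper:

```python
def pure_saddle(A):
    """Pure-strategy saddle point of a zero-sum matrix game (row
    player maximizes, column player minimizes): an entry that is the
    minimum of its row and the maximum of its column.
    Returns (row, col, value) or None if no pure saddle exists."""
    for i, row in enumerate(A):
        for j, v in enumerate(row):
            if v == min(row) and v == max(A[k][j] for k in range(len(A))):
                return i, j, v
    return None

A = [[3, 2, 4],
     [1, 0, 1],
     [5, 2, 6]]
print(pure_saddle(A))                     # (0, 1, 2)
print(pure_saddle([[1, -1], [-1, 1]]))    # None: matching pennies has no pure saddle
```

At the saddle point neither player can improve unilaterally, which is exactly the property the GDHP networks learn to enforce for the continuous control and disturbance laws.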

  10. Status of Low Thrust Work at JSC

    NASA Technical Reports Server (NTRS)

    Condon, Gerald L.

    2004-01-01

High-performance low-thrust (solar electric, nuclear electric, variable specific impulse magnetoplasma rocket) propulsion offers a significant benefit to NASA missions beyond low Earth orbit. As NASA (e.g., the Prometheus Project) endeavors to develop these propulsion systems and associated power supplies, it becomes necessary to develop a refined trajectory design capability that will allow engineers to develop future robotic and human mission designs that take advantage of this new technology. This ongoing work addresses development of a trajectory design and optimization tool for assessing low-thrust (and other) trajectories. It aims to advance the state of the art, enable future NASA missions and their science drivers, and enhance education. This presentation provides a summary of the low-thrust-related JSC activities under the ISP program and, specifically, a look at a new release of a multi-gravity, multi-spacecraft trajectory optimization tool (Copernicus), along with analysis performed using this tool over the past year.

  11. Strong scaling of general-purpose molecular dynamics simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Glaser, Jens; Nguyen, Trung Dac; Anderson, Joshua A.; Lui, Pak; Spiga, Filippo; Millan, Jaime A.; Morse, David C.; Glotzer, Sharon C.

    2015-07-01

We describe a highly optimized implementation of MPI domain decomposition in a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson and Glotzer, 2013). Our approach is inspired by a traditional CPU-based code, LAMMPS (Plimpton, 1995), but is implemented within a code that was designed for execution on GPUs from the start (Anderson et al., 2008). The software supports short-ranged pair force and bond force fields and achieves optimal GPU performance using an autotuning algorithm. We are able to demonstrate equivalent or superior scaling on up to 3375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD) simulations of up to 108 million particles. GPUDirect RDMA capabilities in recent GPU generations provide better performance in full double precision calculations. For a representative polymer physics application, HOOMD-blue 1.0 provides an effective GPU vs. CPU node speed-up of 12.5×.
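
The basic idea of spatial domain decomposition, partitioning the simulation box into a grid of subdomains and assigning each particle to the rank owning its cell, can be sketched as follows. This toy omits everything that makes the real implementation hard (ghost layers, halo communication, GPU transfers) and is not HOOMD-blue's actual scheme:

```python
import random

def decompose(positions, box, grid):
    """Map each particle to a rank id via a uniform (nx, ny, nz) cell grid."""
    nx, ny, nz = grid
    ranks = {}
    for idx, (x, y, z) in enumerate(positions):
        i = min(int(x / box * nx), nx - 1)
        j = min(int(y / box * ny), ny - 1)
        k = min(int(z / box * nz), nz - 1)
        ranks.setdefault((i * ny + j) * nz + k, []).append(idx)
    return ranks

rng = random.Random(0)
pos = [tuple(rng.uniform(0, 10.0) for _ in range(3)) for _ in range(1000)]
owned = decompose(pos, box=10.0, grid=(2, 2, 2))
print(len(owned), sum(len(v) for v in owned.values()))  # 8 1000
```

Each rank then integrates only its own particles, which is what lets the force computation scale across thousands of GPUs.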

  12. Link performance optimization for digital satellite broadcasting systems

    NASA Astrophysics Data System (ADS)

    de Gaudenzi, R.; Elia, C.; Viola, R.

The authors introduce the concept of digital direct satellite broadcasting (D-DBS), which allows unprecedented flexibility by providing a large number of audiovisual services. The concept assumes an information rate of 40 Mb/s, which is compatible with practically all present-day transponders. After discussion of the general system concept, the results of transmission system optimization are presented, taking channel and interference effects into account. Numerical results show that the best-performing scheme is trellis-coded 8-PSK (phase shift keying) modulation concatenated with a Reed-Solomon block code. For a net data rate of 40 Mb/s, a bit error rate of 10^-10 can be achieved with an equivalent bit-energy-to-noise-density ratio (Eb/N0) of 9.5 dB, including channel, interference, and demodulator impairments. A link budget analysis shows how a medium-power direct-to-home TV satellite can provide multimedia services to users equipped with small (60-cm) dish antennas.
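
To illustrate the kind of Eb/N0-to-error-rate mapping a link budget feeds into, here is the closed-form bit error rate of uncoded BPSK over AWGN. This is a deliberately simple baseline, not the paper's coded 8-PSK scheme, which performs far better at a given Eb/N0:

```python
import math

def ber_bpsk(ebn0_db: float) -> float:
    """Uncoded BPSK over AWGN: Pb = Q(sqrt(2 Eb/N0)) = 0.5 erfc(sqrt(Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)  # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebn0))

for db in (6.0, 9.5, 12.0):
    print(f"Eb/N0 = {db} dB -> BER = {ber_bpsk(db):.2e}")
```

Uncoded BPSK needs roughly 9.6 dB for a BER near 1e-5; reaching 1e-10 at 9.5 dB, as the record reports, is only possible because of the concatenated trellis and Reed-Solomon coding.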

  13. Optimal Coordination of Building Loads and Energy Storage for Power Grid and End User Services

    DOE PAGES

    Hao, He; Wu, Di; Lian, Jianming; ...

    2017-01-18

Demand response and energy storage play a profound role in the smart grid. The focus of this study is to evaluate the benefits of coordinating flexible loads and energy storage to provide power grid and end-user services. We present a Generalized Battery Model (GBM) to describe the flexibility of building loads and energy storage. An optimization-based approach is proposed to characterize the parameters (power and energy limits) of the GBM for flexible building loads. We then develop optimal coordination algorithms to provide power grid and end-user services such as energy arbitrage, frequency regulation, spinning reserve, as well as energy cost and demand charge reduction. Several case studies have been performed to demonstrate the efficacy of the GBM and coordination algorithms, and to evaluate the benefits of using their flexibility for power grid and end-user services. We show that optimal coordination yields significant cost savings and revenue, and that the best option for power grid services is to provide energy arbitrage and frequency regulation. Furthermore, when coordinating flexible loads with energy storage to provide end-user services, it is recommended to consider demand charge in addition to time-of-use price in order to flatten the aggregate power profile.
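
A back-of-the-envelope version of the energy-arbitrage service: for an ideal battery, an upper bound on daily revenue comes from charging in the cheapest hours and discharging in the priciest. The tariff shape, capacity, and power rating below are invented, and intertemporal ordering and losses are ignored, so this is optimistic compared with the paper's optimization:

```python
def arbitrage_bound(prices, capacity_kwh, power_kw):
    """Upper bound on daily arbitrage revenue ($): buy energy in the
    k cheapest hours and sell it in the k most expensive, where
    k = capacity / hourly power. Ordering and efficiency are ignored."""
    k = int(capacity_kwh // power_kw)
    s = sorted(prices)
    buy = sum(s[:k]) * power_kw
    sell = sum(s[-k:]) * power_kw
    return sell - buy

# Assumed 24-hour time-of-use tariff in $/kWh (illustrative shape).
tou = [0.08] * 6 + [0.12] * 6 + [0.30] * 4 + [0.12] * 8
print(round(arbitrage_bound(tou, capacity_kwh=40, power_kw=10), 2))  # 8.8
```

A real coordination algorithm would add the demand-charge term the paper recommends, which penalizes the peak of the aggregate profile rather than energy alone.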

  14. Optimized Structures for Low-Profile Phase Change Thermal Spreaders

    NASA Astrophysics Data System (ADS)

    Sharratt, Stephen Andrew

    Thin, low-profile phase change thermal spreaders can provide cooling solutions for some of today's most pressing heat flux dissipation issues. These thermal issues are only expected to increase as future electronic circuitry requirements lead to denser and potentially 3D chip packaging. Phase change based heat spreaders, such as heat pipes or vapor chambers, can provide a practical solution for effectively dissipating large heat fluxes. This thesis reports a comprehensive study of state-of-the-art capillary pumped wick structures using computational modeling, micro wick fabrication, and experimental analysis. Modeling efforts focus on predicting the shape of the liquid meniscus inside a complicated 3D wick structure. It is shown that this liquid shape can drastically affect the wick's thermal resistance. In addition, knowledge of the liquid meniscus shape allows for the computation of key parameters such as permeability and capillary pressure which are necessary for predicting the maximum heat flux. After the model is validated by comparison to experimental results, the wick structure is optimized so as to decrease overall wick thermal resistance and increase the maximum capillary limited heat flux before dryout. The optimized structures are then fabricated out of both silicon and copper using both traditional and novel micro-fabrication techniques. The wicks are made super-hydrophilic using chemical and thermal oxidation schemes. A sintered monolayer of Cu particles is fabricated and analyzed as well. The fabricated wick structures are experimentally tested for their heat transfer performance inside a well controlled copper vacuum chamber. Heat fluxes as high as 170 W/cm2 are realized for Cu wicks with structure heights of 100 μm. The structures optimized for both minimized thermal resistance and high liquid supply ability perform much better than their non-optimized counterparts. 
The super-hydrophilic oxidation scheme is found to drastically increase the maximum heat flux and decrease thermal resistance. This research provides key insights as to how to optimize heat pipe structures to minimize thermal resistance and increase maximum heat flux. These thin wick structures can also be combined with a thicker liquid supply layer so that thin, low-resistance evaporator layers can be constructed and higher heat fluxes realized. The work presented in this thesis can be used to aid in the development of high-performance phase change thermal spreaders, allowing for temperature control of a variety of powerful electronic components.

  15. Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc

    NASA Astrophysics Data System (ADS)

    Becker, Peter; Plesea, Lucian; Maurer, Thomas

    2016-06-01

The volume and number of geospatial images being collected continue to increase rapidly with the growing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process, and disseminate the imagery and information extracted from it. Cloud-based object storage offers significantly lower-cost and elastic storage for this imagery, but also brings disadvantages in terms of greater latency for data access and lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000, and NITF can be downloaded from such object storage, their structure and available compression are not optimal, and access performance is curtailed. This paper provides details on a solution utilizing new open image formats for storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression method that can be used with MRF and provides very good lossless and controlled lossy compression.

  16. Coordinated Optimization of Distributed Energy Resources and Smart Loads in Distribution Systems: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui; Zhang, Yingchen

    2016-08-01

Distributed energy resources (DERs) and smart loads have the potential to provide flexibility to distribution system operation. A coordinated optimization approach is proposed in this paper to actively manage DERs and smart loads in distribution systems to achieve the optimal operation status. A three-phase unbalanced Optimal Power Flow (OPF) problem is developed to determine the output from DERs and smart loads with respect to the system operator's control objective. This paper focuses on coordinating PV systems and smart loads to improve the overall voltage profile in distribution systems. Simulations have been carried out in a 12-bus distribution feeder, and results illustrate the superior control performance of the proposed approach.

  17. Integrated solar energy system optimization

    NASA Astrophysics Data System (ADS)

    Young, S. K.

    1982-11-01

    The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.

  18. Coordinated Optimization of Distributed Energy Resources and Smart Loads in Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui; Zhang, Yingchen

    2016-11-14

Distributed energy resources (DERs) and smart loads have the potential to provide flexibility to distribution system operation. A coordinated optimization approach is proposed in this paper to actively manage DERs and smart loads in distribution systems to achieve the optimal operation status. A three-phase unbalanced Optimal Power Flow (OPF) problem is developed to determine the output from DERs and smart loads with respect to the system operator's control objective. This paper focuses on coordinating PV systems and smart loads to improve the overall voltage profile in distribution systems. Simulations have been carried out in a 12-bus distribution feeder, and results illustrate the superior control performance of the proposed approach.

  19. Robust Control of Uncertain Systems via Dissipative LQG-Type Controllers

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2000-01-01

    Optimal controller design is addressed for a class of linear, time-invariant systems which are dissipative with respect to a quadratic power function. The system matrices are assumed to be affine functions of uncertain parameters confined to a convex polytopic region in the parameter space. For such systems, a method is developed for designing a controller which is dissipative with respect to a given power function, and is simultaneously optimal in the linear-quadratic-Gaussian (LQG) sense. The resulting controller provides robust stability as well as optimal performance. Three important special cases, namely, passive, norm-bounded, and sector-bounded controllers, which are also LQG-optimal, are presented. The results give new methods for robust controller design in the presence of parametric uncertainties.

  20. Joint optimization of source, mask, and pupil in optical lithography

    NASA Astrophysics Data System (ADS)

    Li, Jia; Lam, Edmund Y.

    2014-03-01

Mask topography effects need to be taken into consideration for more advanced resolution enhancement techniques in optical lithography. However, a rigorous 3D mask model achieves high accuracy only at a large computational cost. This work develops a combined source, mask, and pupil optimization (SMPO) approach by taking advantage of the fact that pupil phase manipulation can partially compensate for mask topography effects. We first design the pupil wavefront function by incorporating primary and secondary spherical aberration through the coefficients of the Zernike polynomials, and achieve an optimal source-mask pair under the condition of an aberrated pupil. Evaluations against conventional source-mask optimization (SMO) without pupil aberrations show that SMPO provides improved performance in terms of pattern fidelity and process window size.

  1. CMS-Wave

    DTIC Science & Technology

    2014-10-27

a phase-averaged spectral wind-wave generation and transformation model and its interface in the Surface-water Modeling System (SMS). Ambrose...applications of the Boussinesq (BOUSS-2D) wave model that provides more rigorous calculations for design and performance optimization of integrated...navigation systems. Together these wave models provide reliable predictions on regional and local spatial domains and cost-effective engineering solutions

  2. Novel operation and control of an electric vehicle aluminum/air battery system

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Yang, Shao Hua; Knickle, Harold

    The objective of this paper is to create a method to size battery subsystems for an electric vehicle to optimize battery performance. Optimization of performance includes minimizing corrosion by operating at a constant current density. These subsystems will allow for easy mechanical recharging. A proper choice of battery subsystem will allow for longer battery life, greater range and performance. For longer life, the current density and reaction rate should be nearly constant. The control method requires control of power by controlling electrolyte flow in battery sub modules. As power is increased more sub modules come on line and more electrolyte is needed. Solenoid valves open in a sequence to provide the required power. Corrosion is limited because there is no electrolyte in the modules not being used.
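
The sequencing logic described above, bringing sub-modules online as power demand rises so each active module sees a near-constant current density, reduces to a simple dispatch rule. The module rating and count below are illustrative, not values from the paper:

```python
import math

def active_modules(power_demand_kw, module_kw, n_modules):
    """How many sub-modules should have their solenoid valves open so
    total demand is met while each active module runs near rated load
    (and hence near-constant current density)."""
    need = math.ceil(power_demand_kw / module_kw)
    return max(0, min(need, n_modules))

for p in (0, 7, 18, 60):
    print(p, active_modules(p, module_kw=5, n_modules=10))
# 0 kW -> 0 modules, 7 -> 2, 18 -> 4, 60 -> capped at all 10
```

Modules with closed valves hold no electrolyte, which is what limits corrosion in the idle portion of the pack.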

  3. Simulation-based process windows simultaneously considering two and three conflicting criteria in injection molding

    PubMed Central

    Rodríguez-Yáñez, Alicia Berenice; Méndez-Vázquez, Yaileen

    2014-01-01

    Process windows in injection molding are habitually built with only one performance measure in mind. In reality, a more realistic picture can be obtained when considering multiple performance measures at a time, especially in the presence of conflict. In this work, the construction of process windows for injection molding (IM) is undertaken considering two and three performance measures in conflict simultaneously. The best compromises between the criteria involved are identified through the direct application of the concept of Pareto-dominance in multiple criteria optimization. The aim is to provide a formal and realistic strategy to set processing conditions in IM operations. The resulting optimization approach is easily implementable in MS Excel. The solutions are presented graphically to facilitate their use in manufacturing plants. PMID:25530927

  4. Simulation-based process windows simultaneously considering two and three conflicting criteria in injection molding.

    PubMed

    Rodríguez-Yáñez, Alicia Berenice; Méndez-Vázquez, Yaileen; Cabrera-Ríos, Mauricio

    2014-01-01

    Process windows in injection molding are habitually built with only one performance measure in mind. In reality, a more realistic picture can be obtained when considering multiple performance measures at a time, especially in the presence of conflict. In this work, the construction of process windows for injection molding (IM) is undertaken considering two and three performance measures in conflict simultaneously. The best compromises between the criteria involved are identified through the direct application of the concept of Pareto-dominance in multiple criteria optimization. The aim is to provide a formal and realistic strategy to set processing conditions in IM operations. The resulting optimization approach is easily implementable in MS Excel. The solutions are presented graphically to facilitate their use in manufacturing plants.
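
The Pareto-dominance rule used to build these process windows is easy to state in code: keep a setting only if no other setting is at least as good in every criterion and strictly better in at least one. The (cycle time, warpage) values below are illustrative, not the paper's data:

```python
def pareto_windows(settings):
    """Keep only Pareto-non-dominated process settings
    (all criteria minimized)."""
    def dominated(a, b):
        # b dominates a: b is no worse anywhere and differs somewhere
        return all(x <= y for x, y in zip(b, a)) and b != a
    return [s for s in settings
            if not any(dominated(s, t) for t in settings if t is not s)]

# (cycle time [s], warpage [mm]) for candidate settings -- made-up numbers.
cands = [(30, 0.8), (28, 1.1), (35, 0.5), (30, 0.9), (40, 1.2)]
print(pareto_windows(cands))  # [(30, 0.8), (28, 1.1), (35, 0.5)]
```

The surviving settings are the "best compromises" plotted as the process window; the dominated ones can be discarded without losing any defensible operating point.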

  5. Optimization of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J.; Raboso Mateos, Mariano

    2014-01-01

On the basis of an acoustic biometric system that captures 16 acoustic images of a person at 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In a first stage, an analysis was carried out to determine which images provide more information to the system, showing that a set of 12 images allows the system to obtain results equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system while reducing acquisition time and computational burden, since the number of acoustic images was reduced. PMID:24616643

  6. Computational multiobjective topology optimization of silicon anode structures for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Mitchell, Sarah L.; Ortiz, Michael

    2016-09-01

    This study utilizes computational topology optimization methods for the systematic design of optimal multifunctional silicon anode structures for lithium-ion batteries. In order to develop next generation high performance lithium-ion batteries, key design challenges relating to the silicon anode structure must be addressed, namely the lithiation-induced mechanical degradation and the low intrinsic electrical conductivity of silicon. As such, this work considers two design objectives, the first being minimum compliance under design-dependent volume expansion and the second maximum electrical conduction through the structure, both of which are subject to a constraint on material volume. Density-based topology optimization methods are employed in conjunction with regularization techniques, a continuation scheme, and mathematical programming methods. The objectives are first considered individually, during which the influence of the minimum structural feature size and prescribed volume fraction are investigated. The methodology is subsequently extended to a bi-objective formulation to simultaneously address both the structural and conduction design criteria. The weighted sum method is used to derive the Pareto fronts, which demonstrate a clear trade-off between the competing design objectives. A rigid frame structure was found to be an excellent compromise between the structural and conduction design criteria, providing both the required structural rigidity and direct conduction pathways. The developments and results presented in this work provide a foundation for the informed design and development of silicon anode structures for high performance lithium-ion batteries.
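    The weighted-sum derivation of a Pareto front can be sketched with two toy one-variable objectives standing in for the structural and conduction criteria; sweeping the scalarization weight and minimizing each weighted sum traces the trade-off curve. The objective functions below are illustrative assumptions, not the paper's finite-element objectives.

```python
# Weighted-sum scalarization of a toy bi-objective problem. A design
# variable x in [0, 1] trades one objective against the other.

def f1(x):  # stand-in "structural" objective (minimize)
    return x * x

def f2(x):  # stand-in "conduction" objective (minimize)
    return (1.0 - x) ** 2

grid = [i / 100.0 for i in range(101)]
front = []
for k in range(11):                    # sweep scalarization weight w in [0, 1]
    w = k / 10.0
    best = min(grid, key=lambda x: w * f1(x) + (1 - w) * f2(x))
    front.append((f1(best), f2(best)))

print(front[:3])
```

    Because the toy front is convex, every Pareto point is reachable by some weight; for non-convex fronts the weighted-sum method misses points in the concave regions, which is a known limitation.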

  7. Space Launch System Mission Flexibility Assessment

    NASA Technical Reports Server (NTRS)

    Monk, Timothy; Holladay, Jon; Sanders, Terry; Hampton, Bryan

    2012-01-01

    The Space Launch System (SLS) is envisioned as a heavy lift vehicle that will provide the foundation for future beyond low Earth orbit (LEO) missions. While multiple assessments have been performed to determine the optimal configuration for the SLS, this effort was undertaken to evaluate the flexibility of various concepts for the range of missions that may be required of this system. These mission scenarios include single launch crew and/or cargo delivery to LEO, single launch cargo delivery missions to LEO in support of multi-launch mission campaigns, and single launch beyond LEO missions. Specifically, we assessed options for the single launch beyond LEO mission scenario using a variety of in-space stages and vehicle staging criteria. This was performed to determine the most flexible (and perhaps optimal) method of designing this particular type of mission. A specific mission opportunity to the Jovian system was further assessed to determine potential solutions that may meet currently envisioned mission objectives. This application sought to significantly reduce mission cost by allowing for a direct, faster transfer from Earth to Jupiter and to determine the order-of-magnitude mass margin that would be made available from utilization of the SLS. In general, smaller, existing stages provided comparable performance to larger, new stage developments when the mission scenario allowed for optimal LEO dropoff orbits (e.g. highly elliptical staging orbits). Initial results using this method with early SLS configurations and existing Upper Stages showed the potential of capturing Lunar flyby missions as well as providing significant mass delivery to a Jupiter transfer orbit.

  8. Blanket design and optimization demonstrations of the first wall/blanket/shield design and optimization system (BSDOS).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Nuclear Engineering Division

    2005-05-01

    In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, dimensions of the different blanket zones, and different requirements of the selected materials for a satisfactory performance are the main parameters, which define the blanket performance. These parameters translate to a large number of variables and design constraints, which need to be simultaneously considered in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design and satisfying all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design capabilities of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide the state-of-the-art research and design tool for performing blanket design analyses. This paper describes some of the BSDOS capabilities and demonstrates its use. In addition, the use of the optimization capability of the BSDOS can result in a significant blanket performance enhancement and cost saving for the reactor design under consideration. In this paper, examples are presented, which utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design for demonstrating some of the BSDOS blanket design capabilities.

  9. Blanket Design and Optimization Demonstrations of the First Wall/Blanket/Shield Design and Optimization System (BSDOS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Yousry

    2005-05-15

    In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, dimensions of the different blanket zones, and different requirements of the selected materials for a satisfactory performance are the main parameters, which define the blanket performance. These parameters translate to a large number of variables and design constraints, which need to be simultaneously considered in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design and satisfying all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design capabilities of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide the state-of-the-art research and design tool for performing blanket design analyses. This paper describes some of the BSDOS capabilities and demonstrates its use. In addition, the use of the optimization capability of the BSDOS can result in a significant blanket performance enhancement and cost saving for the reactor design under consideration. In this paper, examples are presented, which utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design for demonstrating some of the BSDOS blanket design capabilities.

  10. Differences in attentional strategies by novice and experienced operating theatre scrub nurses.

    PubMed

    Koh, Ranieri Y I; Park, Taezoon; Wickens, Christopher D; Ong, Lay Teng; Chia, Soon Noi

    2011-09-01

    This study investigated the effect of nursing experience on attention allocation and task performance during surgery. The prevention of retained foreign bodies after surgery typically depends on scrub nurses, who are responsible for performing multiple tasks that impose heavy demands on their cognitive resources. However, the relationship between level of experience and attention allocation strategies has not been extensively studied. Eye movement data were collected from 10 novice and 10 experienced scrub nurses in the operating theater during caesarean section surgeries. Visual scanning data, analyzed by dividing the workstation into four main areas and the surgery into four stages, were compared to the optimal values estimated by the SEEV (Salience, Effort, Expectancy, and Value) model. Both experienced and novice nurses showed significant correlations with the optimal percentage dwell time values, and significant differences in attention allocation optimality were found between experienced and novice nurses, with experienced nurses adhering significantly more closely to the optimum in the stages of high workload. Experienced nurses spent less time on the final count and encountered fewer interruptions during the count than novices, indicating better task management, whereas novice nurses switched attention between areas of interest more often than experienced nurses. The results provide empirical evidence of a relationship between the application of optimal visual attention management strategies and performance, opening up possibilities for the development of visual attention and interruption training for better performance. (c) 2011 APA, all rights reserved.
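    The SEEV comparison can be illustrated with a minimal sketch that converts per-area Salience, Effort, Expectancy, and Value ratings into predicted percentage dwell times (effort enters with a negative sign, since it inhibits attention movement). The areas of interest, coefficients, and ratings below are hypothetical, not the study's data.

```python
# Minimal SEEV sketch: predicted attention to an area of interest (AOI) is
# proportional to  s*Salience - ef*Effort + ex*Expectancy + v*Value.
# All numbers are illustrative assumptions.
coeffs = dict(s=1.0, ef=1.0, ex=1.0, v=1.0)

aois = {
    "surgical field":  dict(sal=0.9, eff=0.1, expct=0.9, val=0.9),
    "instrument tray": dict(sal=0.5, eff=0.3, expct=0.6, val=0.7),
    "count sheet":     dict(sal=0.3, eff=0.5, expct=0.4, val=0.8),
    "periphery":       dict(sal=0.2, eff=0.6, expct=0.2, val=0.1),
}

def seev_weight(a):
    return (coeffs["s"] * a["sal"] - coeffs["ef"] * a["eff"]
            + coeffs["ex"] * a["expct"] + coeffs["v"] * a["val"])

weights = {name: max(seev_weight(a), 0.0) for name, a in aois.items()}
total = sum(weights.values())
dwell = {name: w / total for name, w in weights.items()}  # predicted % dwell
print({k: round(v, 2) for k, v in dwell.items()})
```

    Observed percentage dwell times can then be correlated against these predictions, which is essentially the optimality comparison the study performs.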

  11. A predictive control framework for optimal energy extraction of wind farms

    NASA Astrophysics Data System (ADS)

    Vali, M.; van Wingerden, J. W.; Boersma, S.; Petrović, V.; Kühn, M.

    2016-09-01

    This paper proposes an adjoint-based model predictive control scheme for optimal energy extraction of wind farms. It employs the axial induction factors of the wind turbines to influence their aerodynamic interactions through the wake. The performance index is defined as the total power production of the wind farm over a finite prediction horizon. A medium-fidelity wind farm model is utilized to predict the inflow propagation in advance. The adjoint method is employed to solve the formulated optimization problem cost-effectively, and the first part of the optimal solution is implemented over the control horizon. This procedure is repeated at the next controller sample time, providing feedback to the optimization. The effectiveness and some key features of the proposed approach are studied through simulations of a two-turbine test case.
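    The optimization repeated at each controller sample time can be illustrated with a toy two-turbine example: a simple analytic wake coupling and finite-difference gradient ascent stand in for the paper's medium-fidelity model and adjoint gradient (both are assumptions). The sketch reproduces the characteristic result that de-rating the upstream turbine below its greedy optimum raises total farm power.

```python
# Toy axial-induction optimization for two turbines in a row.
# The wake model and constants are illustrative, not the paper's model.

def power(a1, a2, c=0.6):
    p1 = 4 * a1 * (1 - a1) ** 2          # upstream turbine, unit wind speed
    v2 = 1 - c * a1                      # toy wake deficit downstream
    p2 = v2 ** 3 * 4 * a2 * (1 - a2) ** 2
    return p1 + p2

def optimize(a, steps=500, lr=0.01, h=1e-5):
    # finite-difference gradient ascent over the prediction horizon
    a = list(a)
    for _ in range(steps):
        for i in range(len(a)):
            ap, am = list(a), list(a)
            ap[i] += h
            am[i] -= h
            g = (power(*ap) - power(*am)) / (2 * h)
            a[i] = min(max(a[i] + lr * g, 0.0), 0.5)
    return a

a_opt = optimize([0.2, 0.2])
greedy = [1 / 3, 1 / 3]        # each turbine maximizing only its own power
print(round(power(*a_opt), 4), round(power(*greedy), 4))
```

    In the receding-horizon scheme, only the first part of `a_opt` would be applied before the optimization is re-run with updated inflow predictions.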

  12. Exploring optimal topology of thermal cloaks by CMA-ES

    NASA Astrophysics Data System (ADS)

    Fujii, Garuda; Akimoto, Youhei; Takahashi, Masayuki

    2018-02-01

    This paper presents topology optimization of thermal cloaks expressed by level-set functions and explored using the covariance matrix adaptation evolution strategy (CMA-ES). The designed optimal configurations provide superior thermal-cloaking performance for steady-state thermal conduction and succeed in realizing thermal invisibility, despite the structures being simply composed of iron and aluminum, without the inhomogeneities that arise from employing metamaterials. To design the thermal cloaks, a prescribed objective function evaluates the difference between the temperature field controlled by a thermal cloak and that when no thermal insulator is present. The CMA-ES searches for optimal sets of level-set functions, used as design variables, that minimize a regularized fitness involving a perimeter constraint. Through topology optimization subject to structural symmetries about four axes, we obtain a concept design of a thermal cloak that functions under an isotropic heat flux.
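    The evolutionary search loop can be suggested with a much-simplified (mu/mu, lambda) evolution strategy on a stand-in objective. It omits the covariance matrix adaptation that gives CMA-ES its name, as well as the level-set parameterization; the test function and all constants are illustrative only.

```python
import random

random.seed(1)

def sphere(x):
    # Stand-in objective; the paper instead minimizes a regularized fitness
    # measuring the temperature-field mismatch plus a perimeter constraint.
    return sum(xi * xi for xi in x)

def simple_es(f, x0, sigma=0.5, lam=12, iters=200):
    # (mu/mu, lambda) evolution strategy with isotropic sampling:
    # sample lam offspring around the mean, recombine the mu best.
    mu = lam // 4
    mean = list(x0)
    for _ in range(iters):
        pop = [[m + sigma * random.gauss(0, 1) for m in mean]
               for _ in range(lam)]
        pop.sort(key=f)
        elite = pop[:mu]
        mean = [sum(ind[i] for ind in elite) / mu for i in range(len(mean))]
        sigma *= 0.97              # crude step-size decay (no CMA, no CSA)
    return mean

best = simple_es(sphere, [2.0, -1.5, 1.0])
print([round(v, 3) for v in best])
```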

  13. Optimal Base Station Density of Dense Network: From the Viewpoint of Interference and Load.

    PubMed

    Feng, Jianyuan; Feng, Zhiyong

    2017-09-11

    Network densification has recently attracted increasing attention due to its ability to improve network capacity through spatial reuse and to relieve congestion by offloading. However, excessive densification and aggressive offloading can also degrade network performance because of interference and load problems. In this paper, taking load issues into consideration, we study the optimal base station density that maximizes network throughput. The expected link rate and the utilization ratio of the contention-based channel are derived as functions of base station density using a Poisson Point Process (PPP) and a Markov chain, revealing rules for deployment. Based on these results, we obtain the network throughput and indicate the optimal deployment density under different network conditions. Extensive simulations are conducted to validate our analysis and show the substantial performance gain obtained by the proposed deployment scheme. These results can provide guidance for network densification.
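    The existence of an interior optimal density can be illustrated with a toy throughput model combining a spatial-reuse gain linear in density, a per-link rate that falls with interference, and a contention penalty under load. The functional forms and constants are invented for illustration and are not the paper's PPP/Markov-chain results.

```python
import math

# Toy throughput-vs-density model. More base stations add spatial reuse
# (the leading factor lam) but also raise interference and contention.
# All functional forms and constants are illustrative assumptions.

def throughput(lam):
    rate = math.log2(1 + 5.0 / lam)      # interference grows with density
    contention = math.exp(-lam / 20.0)   # channel utilization drops with load
    return lam * rate * contention

densities = [0.1 * k for k in range(1, 401)]
best = max(densities, key=throughput)
print(round(best, 1), round(throughput(best), 3))
```

    The grid search locates the density at which the reuse gain is exactly balanced by the interference and load penalties, the qualitative trade-off the paper analyzes rigorously.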

  14. Performance evaluation and optimization of multiband phase-modulated radio over IsOWC link with balanced coherent homodyne detection

    NASA Astrophysics Data System (ADS)

    Zong, Kang; Zhu, Jiang

    2018-04-01

    In this paper, we present a multiband phase-modulated (PM) radio over intersatellite optical wireless communication (IsOWC) link with balanced coherent homodyne detection. The proposed system can provide transparent transport of multiband radio frequency (RF) signals with higher linearity and better receiver sensitivity than an intensity-modulated direct-detection (IM/DD) system. The expressions for RF gain, noise figure (NF), and third-order spurious-free dynamic range (SFDR) are derived considering the third-order intermodulation product and amplified spontaneous emission (ASE) noise. The optimal power of the local oscillator (LO) optical signal is also derived theoretically. Numerical results for RF gain, NF, and third-order SFDR are given for demonstration. The results indicate that the gain of the optical preamplifier and the power of the LO optical signal should be optimized to obtain satisfactory performance.

  15. Declarative language design for interactive visualization.

    PubMed

    Heer, Jeffrey; Bostock, Michael

    2010-01-01

    We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.

  16. Comparison of Non-Invasive Individual Monitoring of the Training and Health of Athletes with Commercially Available Wearable Technologies

    PubMed Central

    Düking, Peter; Hotho, Andreas; Holmberg, Hans-Christer; Fuss, Franz Konstantin; Sperlich, Billy

    2016-01-01

    Athletes adapt their training daily to optimize performance, as well as avoid fatigue, overtraining and other undesirable effects on their health. To optimize training load, each athlete must take his/her own personal objective and subjective characteristics into consideration and an increasing number of wearable technologies (wearables) provide convenient monitoring of various parameters. Accordingly, it is important to help athletes decide which parameters are of primary interest and which wearables can monitor these parameters most effectively. Here, we discuss the wearable technologies available for non-invasive monitoring of various parameters concerning an athlete's training and health. On the basis of these considerations, we suggest directions for future development. Furthermore, we propose that a combination of several wearables is most effective for accessing all relevant parameters, disturbing the athlete as little as possible, and optimizing performance and promoting health. PMID:27014077

  17. Performance of the Extravehicular Mobility Unit (EMU) Airlock Coolant Loop Remediation (A/L CLR) Hardware - Final

    NASA Technical Reports Server (NTRS)

    Steele, John W.; Rector, Tony; Gazda, Daniel; Lewis, John

    2011-01-01

    An EMU water processing kit (Airlock Coolant Loop Recovery -- A/L CLR) was developed as a corrective action to Extravehicular Mobility Unit (EMU) coolant flow disruptions experienced on the International Space Station (ISS) in May of 2004 and thereafter. A conservative duty cycle and set of use parameters for A/L CLR use and component life were initially developed and implemented based on prior analysis results and analytical modeling. Several initiatives were undertaken to optimize the duty cycle and use parameters of the hardware. Examination of post-flight samples and EMU Coolant Loop hardware provided invaluable information on the performance of the A/L CLR and has allowed for an optimization of the process. The intent of this paper is to detail the evolution of the A/L CLR hardware, efforts to optimize the duty cycle and use parameters, and the final recommendations for implementation in the post-Shuttle retirement era.

  18. Optimal Design of MPPT Controllers for Grid Connected Photovoltaic Array System

    NASA Astrophysics Data System (ADS)

    Ebrahim, M. A.; AbdelHadi, H. A.; Mahmoud, H. M.; Saied, E. M.; Salama, M. M.

    2016-10-01

    Integrating photovoltaic (PV) plants into the electric power system poses challenges to power system dynamic performance. These challenges stem primarily from the natural characteristics of PV plants, which differ in some respects from those of conventional plants. The most significant challenge is how to extract and regulate the maximum power from the sun. This paper presents optimal designs for the most commonly used Maximum Power Point Tracking (MPPT) techniques, based on Proportional-Integral controllers tuned by Particle Swarm Optimization (PI-PSO). The techniques considered are (1) incremental conductance, (2) perturb and observe, (3) fractional short-circuit current, and (4) fractional open-circuit voltage. This work provides a comprehensive comparative study of the energy availability ratio from the photovoltaic panels. The simulation results show that the proposed controllers have an impressive tracking response and that system dynamic performance is greatly improved.
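    The PI-PSO idea can be sketched by tuning PI gains with a plain particle swarm on a toy first-order plant standing in for the PV/MPPT dynamics (an assumption), minimizing the integral of squared tracking error for a step reference.

```python
import random

random.seed(2)

def ise(gains, steps=200, dt=0.01):
    # integral of squared error for a PI controller on a toy plant
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y                  # unit step reference
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)           # toy plant: dy/dt = -y + u
        cost += e * e * dt
    return cost

def pso(f, bounds, n=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [f(p) for p in pos]
    g = pbest[min(range(n), key=lambda i: pcost[i])][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = f(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < f(g):
                    g = pos[i][:]
    return g

best_gains = pso(ise, [(0.0, 50.0), (0.0, 50.0)])
print([round(v, 2) for v in best_gains], round(ise(best_gains), 4))
```

    In the paper the fitness would instead be evaluated on the full PV converter model for each MPPT technique; the swarm update itself is unchanged.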

  19. An Analytical Solution for Yaw Maneuver Optimization on the International Space Station and Other Orbiting Space Vehicles

    NASA Technical Reports Server (NTRS)

    Dobrinskaya, Tatiana

    2015-01-01

    This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations as well as for other activities. With maneuver optimization, large maneuvers that were previously performed on thrusters can be performed either using control moment gyroscopes (CMG) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and to reduce structural loads, an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of critical elements of the vehicle structure, such as the solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the optimized case obtained, the torques are significantly reduced. This torque reduction was compared to that of the existing optimization method, which relies on a computational solution; the attitude profiles and the torque reduction match well between the two methods, and simulations using the ISS flight software showed similar propellant consumption for both. The analytical solution proposed in this paper has major benefits with respect to the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, making maneuver execution automatic. An automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground. It also reduces the probability of command errors. The suggested analytical solution thus provides a less complicated, automatic, and more universal method of maneuver optimization. The optimization approach presented in this paper can be used not only for the ISS but also for other orbiting space vehicles.

  20. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. 
In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.

  1. Computer Program for Analysis, Design and Optimization of Propulsion, Dynamics, and Kinematics of Multistage Rockets

    NASA Astrophysics Data System (ADS)

    Lali, Mehdi

    2009-03-01

    A comprehensive computer program is designed in MATLAB to analyze, design, and optimize the propulsion, dynamics, thermodynamics, and kinematics of any serial multistage rocket for a given set of data. The program is user-friendly and comprises two main sections: "analysis and design" and "optimization." Each section has a GUI (Graphical User Interface) through which the rocket's data are entered and the program is run. The first section analyzes the performance of a rocket previously devised by the user; numerous plots and subplots display the rocket's performance. The second section finds the "optimum trajectory" via billions of iterations and computations, performed through sophisticated algorithms using numerical methods and incremental integration. Innovative techniques are applied to calculate the optimal engine parameters and to design the "optimal pitch program." The program is self-contained in that it calculates almost every design parameter relating to rocket propulsion and dynamics. It is meant to be used for actual launch operations as well as for educational and research purposes.
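    One staple computation in such a program is the multistage delta-v from the Tsiolkovsky rocket equation together with an optimal stage-mass split. The sketch below grid-searches a two-stage split; all masses, structural fractions, and specific impulses are hypothetical, and the original program's MATLAB algorithms are far more elaborate.

```python
import math

# Two-stage delta-v via the Tsiolkovsky rocket equation, with a grid
# search over the stage-mass split. All numbers are hypothetical.

G0 = 9.80665          # standard gravity, m/s^2
PAYLOAD = 1000.0      # kg
TOTAL = 100000.0      # kg of total stage mass (propellant + structure)
EPS = 0.10            # structural mass fraction of each stage
ISP = (300.0, 350.0)  # s, first- and second-stage specific impulse

def delta_v(split):
    m1 = split * TOTAL             # first-stage mass
    m2 = TOTAL - m1                # second-stage mass
    start1 = PAYLOAD + m1 + m2     # full vehicle at first-stage ignition
    end1 = start1 - (1 - EPS) * m1         # first-stage propellant burned
    start2 = PAYLOAD + m2                  # first-stage structure dropped
    end2 = start2 - (1 - EPS) * m2
    return G0 * (ISP[0] * math.log(start1 / end1) +
                 ISP[1] * math.log(start2 / end2))

splits = [k / 100.0 for k in range(5, 96)]
best = max(splits, key=delta_v)
print(round(best, 2), round(delta_v(best), 1))
```

    The interior maximum shows why staging is an optimization problem at all: too small a first stage wastes the high-thrust burn, while too small a second stage wastes the higher-Isp engine.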

  2. CORSS: Cylinder Optimization of Rings, Skin, and Stringers

    NASA Technical Reports Server (NTRS)

    Finckenor, J.; Rogers, P.; Otte, N.

    1994-01-01

    Launch vehicle designs typically make extensive use of cylindrical skin-stringer construction. Structural analysis methods are well developed for preliminary design of this type of construction. This report describes an automated, iterative method to obtain a minimum-weight preliminary design. Structural optimization has been researched extensively, and various programs have been written for this purpose. Their complexity and ease of use depend on their generality, the failure modes considered, the methodology used, and the rigor of the analysis performed. This computer program employs closed-form solutions from a variety of well-known structural analysis references and joins them with a commercially available numerical optimizer called the 'Design Optimization Tool' (DOT). Any ring- and stringer-stiffened shell structure of isotropic materials under beam-type loading can be analyzed; plasticity effects are not included. The program performs a more limited analysis than programs such as PANDA, but it provides an easy and useful preliminary design tool for a large class of structures. This report briefly describes the optimization theory, outlines the development and use of the program, and describes the analysis techniques that are used. Examples of program input and output, as well as the listing of the analysis routines, are included.

  3. Design optimization of a fuzzy distributed generation (DG) system with multiple renewable energy sources

    NASA Astrophysics Data System (ADS)

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2012-09-01

    The global rise in energy demand poses major obstacles to many energy organizations in providing an adequate energy supply. Hence, many techniques for generating cost-effective, reliable, and environmentally friendly alternative energy are being explored. One such method is the integration of photovoltaic cells, wind turbine generators, and fuel-based generators, together with storage batteries. Such power systems are known as distributed generation (DG) power systems. However, the application of DG power systems raises certain issues, such as cost effectiveness, environmental impact, and reliability. The modelling and optimization of this DG power system were successfully performed in previous work using Particle Swarm Optimization (PSO). The central idea of that work was to minimize cost, minimize emissions, and maximize reliability (a multi-objective (MO) setting) with respect to the power balance and design requirements. In this work, we introduce a fuzzy model that accounts for the uncertain nature of certain variables in the DG system that depend on weather conditions (such as the insolation and wind speed profiles). The MO optimization in a fuzzy environment was performed by applying the Hopfield Recurrent Neural Network (HNN), and an analysis of the optimized results was then carried out.

  4. Phase-Division-Based Dynamic Optimization of Linkages for Drawing Servo Presses

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Gang; Wang, Li-Ping; Cao, Yan-Ke

    2017-11-01

    Existing linkage-optimization methods are designed for mechanical presses; few can be directly used for servo presses, so development of the servo press is limited. Based on the complementarity of linkage optimization and motion planning, a phase-division-based linkage-optimization model for a drawing servo press is established. Considering the motion-planning principles of a drawing servo press, and taking account of work rating and efficiency, the constraints of the optimization model are constructed. Linkage is optimized in two modes: constant eccentric speed or constant slide speed in the work segments. The performance of the optimized linkages is compared with that of a mature linkage, SL4-2000A, which was optimized by a traditional method. The results show that the work rating of a drawing servo press equipped with linkages optimized by the new method is improved, while the root-mean-square torque of the servo motors is reduced by more than 10%. This research provides a promising method for designing energy-saving drawing servo presses with high work ratings.

  5. Optimal service distribution in WSN service system subject to data security constraints.

    PubMed

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong

    2014-08-04

    Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service-oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service-oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from the user into a set of atom-services (AS) and send them to independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of the WSA, because the SNs can have different, independent specifications. Through optimal partition of the service into ASs and their distribution among the SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm for optimal service partition and distribution based on the universal generating function (UGF) and a genetic algorithm (GA) approach. An experimental analysis is presented to demonstrate the feasibility of the suggested algorithm.
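    The universal generating function (UGF) technique mentioned above can be sketched as the composition of discrete performance distributions; here, the completion times of two atom-services running in parallel on independent nodes, where the composite completion time is the maximum of the two. The distributions below are illustrative.

```python
from itertools import product as cartesian

# Minimal UGF sketch: each node's atom-service execution time is a
# discrete distribution (value -> probability). Composing independent
# nodes multiplies probabilities and combines values with an operator
# (max for parallel completion, + for sequential). Data are illustrative.

def compose(u1, u2, op):
    """Combine two UGFs (dict: value -> probability) with operator op."""
    out = {}
    for (v1, p1), (v2, p2) in cartesian(u1.items(), u2.items()):
        v = op(v1, v2)
        out[v] = out.get(v, 0.0) + p1 * p2
    return out

node_a = {2: 0.7, 5: 0.3}      # finishes in 2 w.p. 0.7, in 5 w.p. 0.3
node_b = {3: 0.9, 6: 0.1}

parallel = compose(node_a, node_b, max)    # both ASs run in parallel
p_meet = sum(p for v, p in parallel.items() if v <= 4)  # P(done within 4)
print(parallel, round(p_meet, 3))
```

    In the paper, a GA searches over partitions and assignments while a UGF evaluation like this scores each candidate's reliability and expected performance.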

  6. Optimal Service Distribution in WSN Service System Subject to Data Security Constraints

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong

    2014-01-01

    Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service-oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service-oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from the user into a set of atom-services (AS) and send them to independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of the WSA, because the SNs can have different, independent specifications. Through optimal partition of the service into ASs and their distribution among the SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm for optimal service partition and distribution based on the universal generating function (UGF) and a genetic algorithm (GA) approach. An experimental analysis is presented to demonstrate the feasibility of the suggested algorithm. PMID:25093346

  7. Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors

    NASA Astrophysics Data System (ADS)

    Tun, Min Thaw; Sakaguchi, Daisaku

    2016-06-01

    A high pressure ratio and a wide operating range are essential for turbochargers in diesel engines. A recirculation flow type casing treatment is effective for flow-range enhancement of centrifugal compressors. Two ring grooves on a suction pipe and a shroud casing wall are connected by an annular passage, and a stable recirculation flow forms at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation flow type casing is modified and optimized using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization yields an optimized casing design with improved adiabatic efficiency over a wide operating flow-rate range. A sensitivity analysis of efficiency with respect to the design parameters has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, for which the increment of entropy rise is minimized at the grooves and passages of the rotating impeller.

  8. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    NASA Astrophysics Data System (ADS)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

    In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which yields the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data in MATLAB, where the inventory level must be controlled as closely as possible to a chosen set point. The results show that the robust predictive control model provides the optimal purchase volume and that the inventory level followed the given set point.
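
    As a rough, hypothetical illustration of the receding-horizon idea (not the paper's robust predictive controller), the sketch below applies a certainty-equivalence ordering rule to the linear inventory dynamics x[t+1] = x[t] + u[t] - d[t]: each period it orders just enough to steer the expected next inventory level back to the set point. All numbers are made up.

```python
import random

def simulate(steps=50, setpoint=100.0, mean_demand=20.0, seed=7):
    """Certainty-equivalence receding-horizon policy for a single-product
    inventory: each period, order enough so that the *expected* next
    inventory level equals the set point."""
    rng = random.Random(seed)
    x = 0.0                               # initial inventory level
    levels = []
    for _ in range(steps):
        # optimal one-step order under expected demand (non-negative)
        u = max(0.0, setpoint - x + mean_demand)
        d = rng.gauss(mean_demand, 5.0)   # random demand realization
        x = x + u - d                     # inventory balance equation
        levels.append(x)
    return levels

levels = simulate()
```

    After the first period, the inventory level fluctuates around the set point with spread equal to the demand noise, which is the tracking behavior the abstract describes.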

  9. Optimal Solutions of Multiproduct Batch Chemical Process Using Multiobjective Genetic Algorithm with Expert Decision System

    PubMed Central

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

    Optimal design problems are widely known for their multiple performance measures, which often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II is capable of fine-tuning variables to determine a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. Outranking with PROMETHEE II then helps the decision-maker finalize the selection of a best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537
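
    The core of NSGA-II referenced here is non-dominated sorting, which can be sketched in a few lines. This is a naive version that rebuilds each front by scanning all remaining points, not the fast sort of the actual algorithm, and the sample objective vectors are invented for illustration (both objectives minimized):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(points):
    """Group points into Pareto fronts, best front first."""
    fronts, remaining = [], list(points)
    while remaining:
        # a point belongs to the current front if nothing dominates it
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
fronts = non_dominated_sort(pts)
```

    Here the first front is the Pareto set {(1, 5), (2, 2), (5, 1)}; (3, 3) is dominated by (2, 2) and lands in the second front, and (4, 4) in the third.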

  10. Audit and feedback interventions to improve endoscopist performance: Principles and effectiveness.

    PubMed

    Tinmouth, Jill; Patel, Jigisha; Hilsden, Robert J; Ivers, Noah; Llovet, Diego

    2016-06-01

    There is considerable variation in the quality of colonoscopy, attributable in part to endoscopist performance. Audit and feedback (A&F) provides health professionals with a summary of their performance over a period of time and is a common strategy used to improve provider performance. In this review, we discuss current understanding of the mechanism of A&F and describe specific features of effective A&F. To date, trials of A&F to improve colonoscopy performance report heterogeneous results, in part because colonoscopy is a complex procedural skill but also because the quality improvement interventions were sub-optimally implemented or inadequately evaluated. Nonetheless, evidence from a wide range of literature suggests that A&F has the potential to improve endoscopist performance. We discuss future directions for research in this area and provide guidance for providers or health system planners wishing to implement A&F to address quality of colonoscopy in their practice and/or jurisdiction. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. A Framework for Dimensioning VDL-2 Air-Ground Networks

    NASA Technical Reports Server (NTRS)

    Ribeiro, Leila Z.; Monticone, Leone C.; Snow, Richard E.; Box, Frank; Apaza, Rafel; Bretmersky, Steven

    2014-01-01

    This paper describes a framework developed at MITRE for dimensioning a Very High Frequency (VHF) Digital Link Mode 2 (VDL-2) Air-to-Ground network. This framework was developed to support the FAA's Data Communications (Data Comm) program by providing estimates of expected capacity required for the air-ground network services that will support Controller-Pilot Data Link Communications (CPDLC), as well as the spectrum needed to operate the system at required levels of performance. The Data Comm program is part of the FAA's NextGen initiative to implement advanced communication capabilities in the National Airspace System (NAS). The first component of the framework is the radio-frequency (RF) coverage design for the network ground stations. Then we proceed to describe the approach used to assess the aircraft geographical distribution and the data traffic demand expected in the network. The next step is resource allocation, utilizing optimization algorithms developed in MITRE's Spectrum Prospector™ tool to propose frequency assignment solutions, and a NASA-developed VDL-2 tool to perform simulations and determine whether a proposed plan meets the desired performance requirements. The framework presented is capable of providing quantitative estimates of multiple variables related to the air-ground network, in order to satisfy established coverage, capacity and latency performance requirements. Outputs include: coverage provided at different altitudes; data capacity required in the network, aggregated or on a per ground station basis; spectrum (pool of frequencies) needed for the system to meet a target performance; optimized frequency plan for a given scenario; expected performance given spectrum available; and estimates of throughput distributions for a given scenario. We conclude with a discussion aimed at providing insight into the tradeoffs and challenges identified with respect to radio resource management for VDL-2 air-ground networks.

  12. Optimizing the Usability of Brain-Computer Interfaces.

    PubMed

    Zhang, Yin; Chase, Steve M

    2018-05-01

    Brain-computer interfaces are in the process of moving from the laboratory to the clinic. These devices act by reading neural activity and using it to directly control a device, such as a cursor on a computer screen. An open question in the field is how to map neural activity to device movement in order to achieve the most proficient control. This question is complicated by the fact that learning, especially the long-term skill learning that accompanies weeks of practice, can allow subjects to improve performance over time. Typical approaches to this problem attempt to maximize the biomimetic properties of the device in order to limit the need for extensive training. However, it is unclear if this approach would ultimately be superior to performance that might be achieved with a nonbiomimetic device once the subject has engaged in extended practice and learned how to use it. Here we approach this problem using ideas from optimal control theory. Under the assumption that the brain acts as an optimal controller, we present a formal definition of the usability of a device and show that the optimal postlearning mapping can be written as the solution of a constrained optimization problem. We then derive the optimal mappings for particular cases common to most brain-computer interfaces. Our results suggest that the common approach of creating biomimetic interfaces may not be optimal when learning is taken into account. More broadly, our method provides a blueprint for optimal device design in general control-theoretic contexts.

  13. PAVENET OS: A Compact Hard Real-Time Operating System for Precise Sampling in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Saruwatari, Shunsuke; Suzuki, Makoto; Morikawa, Hiroyuki

    This paper presents a compact hard real-time operating system for wireless sensor nodes called PAVENET OS. PAVENET OS provides hybrid multithreading: preemptive multithreading and cooperative multithreading. Both forms of multithreading are optimized for the two kinds of tasks found in wireless sensor networks: real-time tasks and best-effort tasks. PAVENET OS can efficiently perform hard real-time tasks that cannot be performed by TinyOS. The paper demonstrates through quantitative evaluation that hybrid multithreading achieves compactness and low overheads comparable to those of TinyOS. The evaluation results show that PAVENET OS performs 100 Hz sensor sampling with 0.01% jitter while performing wireless communication tasks, whereas optimized TinyOS has 0.62% jitter. In addition, PAVENET OS has a small footprint and low overheads (minimum RAM size: 29 bytes, minimum ROM size: 490 bytes, minimum task switch time: 23 cycles).

  14. Calculated performance, stability and maneuverability of high-speed tilting-prop-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne; Lau, Benton H.; Bowles, Jeffrey V.

    1986-01-01

    The feasibility of operating tilting-prop-rotor aircraft at high speeds is examined by calculating the performance, stability, and maneuverability of representative configurations. The rotor performance is examined in high-speed cruise and in hover. The whirl-flutter stability of the coupled wing and rotor motion is calculated in the cruise mode. Maneuverability is examined in terms of the rotor-thrust limit during turns in the helicopter configuration. Rotor airfoils, rotor-hub configuration, wing airfoil, and airframe structural weights representing demonstrated advanced technology are discussed. Key rotor and airframe parameters are optimized for high-speed performance and stability. The basic aircraft-design parameters are optimized for minimum gross weight. To provide a focus for the calculations, two high-speed tilt-rotor aircraft are considered: a 46-passenger civil transport and an air-combat/escort fighter, both with design speeds of about 400 knots. It is concluded that such high-speed tilt-rotor aircraft are quite practical.

  15. Best Practices for Optimizing DoD Contractor Safety and Occupational Health Program Performance

    DTIC Science & Technology

    2012-12-01

    such as Accident Prevention Plan (APP), Activity Hazard Analysis (AHA), Quality Assurance Surveillance Plans (QASP), etc. Contract administration...technology support, medical, and maintenance of equipment and facilities. The DoD Guidebook for the Acquisition of Services provides acquisition...OSHA regulations and perform in accordance with an applicable accident prevention program that complies with State and Federal requirements. The

  16. ACT Payload Shroud Structural Concept Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Zalewski, Bart B.; Bednarcyk, Brett A.

    2010-01-01

    Aerospace structural applications demand a weight-efficient design to perform in a cost-effective manner. This is particularly true for launch vehicle structures, where weight is the dominant design driver. The design process typically requires many iterations to ensure that a satisfactory minimum weight has been obtained. Although metallic structures can be weight efficient, composite structures can provide additional weight savings due to their lower density and additional design flexibility. This work presents structural analysis and weight optimization of a composite payload shroud for NASA's Ares V heavy lift vehicle. Two concepts, which were previously determined to be efficient for such a structure, are evaluated: a hat stiffened/corrugated panel and a fiber reinforced foam sandwich panel. A composite structural optimization code, HyperSizer, is used to optimize the panel geometry, composite material ply orientations, and sandwich core material. HyperSizer enables an efficient evaluation of thousands of potential designs against multiple strength- and stability-based failure criteria across multiple load cases. The HyperSizer sizing process uses a global finite element model to obtain element forces, which are statistically processed to arrive at panel-level design-to loads. These loads are then used to analyze each candidate panel design. A near-optimum design is selected as the one with the lowest weight that also provides all positive margins of safety. The stiffness of each newly sized panel or beam component is taken into account in the subsequent finite element analysis. The analysis/optimization iteration is repeated to ensure a converged design. Sizing results for the hat stiffened panel concept and the fiber reinforced foam sandwich concept are presented.

  17. Airline Maintenance Manpower Optimization from the De Novo Perspective

    NASA Astrophysics Data System (ADS)

    Liou, James J. H.; Tzeng, Gwo-Hshiung

    Human resource management (HRM) is an important issue in today's competitive airline market. In this paper, we discuss a multi-objective model designed from the De Novo perspective to help airlines optimize their maintenance manpower portfolio. The effectiveness of the model and solution algorithm is demonstrated in an empirical study of the optimization of the human resources needed for airline line maintenance. Both De Novo and traditional multiple objective programming (MOP) methods are analyzed. A comparison of the results with those of traditional MOP indicates that the proposed model and solution algorithm provide better performance and an improved human resource portfolio.

  18. Application of the optimal homotopy asymptotic method to nonlinear Bingham fluid dampers

    NASA Astrophysics Data System (ADS)

    Marinca, Vasile; Ene, Remus-Daniel; Bereteu, Liviu

    2017-10-01

    Dynamic response time is an important feature for determining the performance of magnetorheological (MR) dampers in practical civil engineering applications. The objective of this paper is to show how to use the Optimal Homotopy Asymptotic Method (OHAM) to give approximate analytical solutions of the nonlinear differential equation of a modified Bingham model with non-viscous exponential damping. Our procedure does not depend upon small parameters and provides us with a convenient way to optimally control the convergence of the approximate solutions. OHAM is very efficient in practice for ensuring very rapid convergence of the solution after only one iteration and with a small number of steps.

  19. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  20. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov minimization scheme is developed for photoacoustic imaging. This approach is based on least squares QR decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
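
    A minimal sketch of automatic regularization-parameter choice, assuming a small dense problem where the SVD stands in for the paper's LSQR-based dimensionality reduction. The selection rule shown here is generalized cross-validation over a lambda grid (a different criterion than the paper's), and the test problem is synthetic:

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Solve min ||Ax-b||^2 + lam^2 ||x||^2 for each lam on a grid and
    pick the one minimizing the generalized cross-validation score."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    best = None
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
        x = Vt.T @ ((f / s) * beta)         # filtered SVD solution
        resid = np.linalg.norm(A @ x - b)
        denom = len(b) - f.sum()            # effective degrees of freedom
        gcv = (resid / denom) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, x)
    return best[1], best[2]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.1 * rng.standard_normal(50)
lam, x = tikhonov_gcv(A, b, np.logspace(-3, 1, 30))
```

    On this well-conditioned synthetic system the GCV criterion favors a small lambda, and the recovered x stays close to the known ground truth, which mirrors the quantitative-comparison setup the abstract describes.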

  1. Virtually optimized insoles for offloading the diabetic foot: A randomized crossover study.

    PubMed

    Telfer, S; Woodburn, J; Collier, A; Cavanagh, P R

    2017-07-26

    Integration of objective biomechanical measures of foot function into the design process for insoles has been shown to provide enhanced plantar tissue protection for individuals at risk of plantar ulceration. The use of virtual simulations utilizing numerical modeling techniques offers a potential approach to further optimize these devices. In a patient population at risk of foot ulceration, we aimed to compare the pressure offloading performance of insoles that were optimized via numerical simulation techniques against shape-based devices. Twenty participants with diabetes and at-risk feet were enrolled in this study. Three pairs of personalized insoles were evaluated: one based on shape data and manufactured via direct milling, and two based on a design derived from shape, pressure, and ultrasound data that underwent a finite element analysis-based virtual optimization procedure. For the latter design, one pair was manufactured via direct milling and a second pair through 3D printing. The offloading performance of the insoles was analyzed for forefoot regions identified as having elevated plantar pressures. In 88% of the regions of interest, the use of virtually optimized insoles resulted in lower peak plantar pressures compared to the shape-based devices. Overall, the virtually optimized insoles significantly reduced peak pressures by a mean of 41.3 kPa (p<0.001, 95% CI [31.1, 51.5]) for milled and 40.5 kPa (p<0.001, 95% CI [26.4, 54.5]) for printed devices compared to shape-based insoles. The integration of virtual optimization into the insole design process resulted in improved offloading performance compared to standard, shape-based devices. ISRCTN19805071, www.ISRCTN.org. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. A stepwise approach for the reproducible optimization of PAMO expression in Escherichia coli for whole-cell biocatalysis

    PubMed Central

    2012-01-01

    Background Baeyer-Villiger monooxygenases (BVMOs) represent a group of enzymes of considerable biotechnological relevance, as illustrated by their growing use as biocatalysts in a variety of synthetic applications. However, due to their increased use, the reproducible expression of BVMOs and other biotechnologically relevant enzymes has become a pressing matter, while knowledge about the factors governing their reproducible expression is scattered. Results Here, we have used phenylacetone monooxygenase (PAMO) from Thermobifida fusca, a prototype Type I BVMO, as a model enzyme to develop a stepwise strategy to optimize the biotransformation performance of recombinant E. coli expressing PAMO in 96-well microtiter plates in a reproducible fashion. Using this system, the best expression conditions for PAMO were investigated first, including different host strains, temperature, and the time and induction period for PAMO expression. This optimized system was then used to improve the conditions of the biotransformation, the PAMO-catalyzed conversion of phenylacetone, by evaluating the best electron donor, substrate concentration, and the temperature and length of the biotransformation. Combining all optimized parameters resulted in a more than four-fold enhancement of the biocatalytic performance and, importantly, this was highly reproducible, as indicated by relative standard deviations of 1% for non-washed cells and 3% for washed cells. Furthermore, the optimized procedure was successfully adapted for activity-based mutant screening. Conclusions Our optimized procedure, which provides a comprehensive overview of the key factors influencing the reproducible expression and performance of a biocatalyst, is expected to form a rational basis for the optimization of miniaturized biotransformations and for the design of novel activity-based screening procedures suitable for BVMOs and other NAD(P)H-dependent enzymes. PMID:22720747

  3. Imaging performance of an isotropic negative dielectric constant slab.

    PubMed

    Shivanand; Liu, Huikan; Webb, Kevin J

    2008-11-01

    The influence of material and thickness on the subwavelength imaging performance of a negative dielectric constant slab is studied. Resonance in the plane-wave transfer function produces a high spatial frequency ripple that could be useful in fabricating periodic structures. A cost function based on the plane-wave transfer function provides a useful metric to evaluate the planar slab lens performance, and using this, the optimal slab dielectric constant can be determined.

  4. A Scalable, Parallel Approach for Multi-Point, High-Fidelity Aerostructural Optimization of Aircraft Configurations

    NASA Astrophysics Data System (ADS)

    Kenway, Gaetan K. W.

    This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine-accurate derivative techniques and is verified to yield fully consistent derivatives by comparison against the complex-step method. The fully coupled, large-scale adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300 000 DOF. Two design optimization problems are solved: one where takeoff gross weight is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. The TOGW minimization yields a 4.2% reduction in TOGW with a 6.6% fuel burn reduction, while the fuel burn optimization yields an 11.2% fuel burn reduction with no change in takeoff gross weight.

  5. Neural dynamic programming and its application to control systems

    NASA Astrophysics Data System (ADS)

    Seong, Chang-Yun

    There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method of optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman equation (HJB) or the Euler-Lagrange equations (EL). The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP and benefits from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network. 
As a result, NDP finds---with a reasonable amount of computation and storage---the optimal feedback solutions to nonlinear MIMO control problems that would be very difficult to solve with DP. NDP was demonstrated on several applications such as the lateral autopilot logic for a Boeing 747, the minimum fuel control of a double-integrator plant with bounded control, the backward steering of a two-trailer truck, and the set-point control of a two-link robot arm.
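
    The feedback character of DP solutions described above can be illustrated on a toy discrete problem: value iteration on a five-state chain returns a cost-to-go and an optimal action for every state (a feedback policy), rather than the single open-loop trajectory that a feedforward method would produce. This example is illustrative only and is unrelated to the dissertation's applications.

```python
def value_iteration(n=5, goal=4, iters=50):
    """Value iteration on a deterministic chain of n states.

    Actions move one step left (-1) or right (+1) at unit cost; the
    goal state is absorbing with zero cost.  The result is the optimal
    cost-to-go V and a feedback policy (best action per state)."""
    V = [0.0] * n
    policy = [0] * n
    for _ in range(iters):
        for s in range(n):
            if s == goal:
                continue                      # absorbing goal state
            best = None
            for a in (-1, 1):
                s2 = min(max(s + a, 0), n - 1)  # clamp to the chain
                c = 1.0 + V[s2]                 # Bellman backup
                if best is None or c < best[0]:
                    best = (c, a)
            V[s], policy[s] = best
    return V, policy

V, policy = value_iteration()
```

    The iteration converges to V = [4, 3, 2, 1, 0], the distance to the goal, with the policy moving right everywhere: a control law defined on the whole state space, which is exactly what makes DP (and its neural approximation) robust to disturbances.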

  6. The Study of an Integrated Rating System for Supplier Quality Performance in the Semiconductor Industry

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Cheng; Yen, Tieh-Min; Tsai, Chih-Hung

    This study provides an integrated model of Supplier Quality Performance Assessment (SQPA) activity for the semiconductor industry by introducing the ISO 9001 management framework, Importance-Performance Analysis (IPA), and Taguchi's Signal-to-Noise ratio (S/N) techniques. This integrated model provides a SQPA methodology to create value for all members under mutual cooperation and trust in the supply chain. This method helps organizations build a complete SQPA framework, linking organizational objectives and SQPA activities and optimizing rating techniques to promote supplier quality improvement. The techniques used in SQPA activities are easily understood. A case involving a design house is illustrated to show our model.

  7. High-Fidelity Aerostructural Optimization of Nonplanar Wings for Commercial Transport Aircraft

    NASA Astrophysics Data System (ADS)

    Khosravi, Shahriar

    Although the aerospace sector is currently responsible for a relatively small portion of global anthropogenic greenhouse gas emissions, the growth of the airline industry raises serious concerns about the future of commercial aviation. As a result, the development of new aircraft design concepts with the potential to improve fuel efficiency remains an important priority. Numerical optimization based on high-fidelity physics has become an increasingly attractive tool over the past fifteen years in the search for environmentally friendly aircraft designs that reduce fuel consumption. This approach is able to discover novel design concepts and features that may never be considered without optimization. This can help reduce the economic costs and risks associated with developing new aircraft concepts by providing a more realistic assessment early in the design process. This thesis provides an assessment of the potential efficiency improvements obtained from nonplanar wings through the application of fully coupled high-fidelity aerostructural optimization. In this work, we conduct aerostructural optimization using the Euler equations to model the flow along with a viscous drag estimate based on the surface area. A major focus of the thesis is on finding the optimal shape and performance benefits of nonplanar wingtip devices. Two winglet configurations are considered: winglet-up and winglet-down. These are compared to optimized planar wings of the same projected span in order to quantify the possible drag reductions offered by winglets. In addition, the drooped wing is studied in the context of exploratory optimization. The main results show that the winglet-down configuration is the most efficient winglet shape, reducing the drag by approximately 2% at the same weight in comparison to a planar wing. There are two reasons for the superior performance of this design. First, this configuration moves the tip vortex further away from the wing. 
Second, the winglet-down concept has a higher projected span at the deflected state due to the structural deflections. Finally, the exploratory optimization studies lead to a drooped wing with the potential to increase range by 4.9% relative to a planar wing.

  8. Genetic algorithm approaches for conceptual design of spacecraft systems including multi-objective optimization and design under uncertainty

    NASA Astrophysics Data System (ADS)

    Hassan, Rania A.

    In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as a standard single-objective deterministic Genetic Algorithm.
The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessarily high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
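
    The discrete, subsystem-option flavor of this problem can be illustrated with a minimal sketch (the subsystem catalog, masses, and reliabilities below are invented, and a weighted scalarization stands in for the paper's true multi-objective treatment):

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical discrete design space: one gene per subsystem, each gene an
# index into that subsystem's technology options, given as (mass kg, reliability).
OPTIONS = {
    "power":   [(120, 0.95), (90, 0.90), (150, 0.99)],
    "comms":   [(40, 0.92), (55, 0.97)],
    "thermal": [(30, 0.90), (45, 0.96), (25, 0.85)],
}
KEYS = list(OPTIONS)

def random_design():
    return [random.randrange(len(OPTIONS[k])) for k in KEYS]

def evaluate(design):
    mass = sum(OPTIONS[k][d][0] for k, d in zip(KEYS, design))
    reliability = 1.0
    for k, d in zip(KEYS, design):
        reliability *= OPTIONS[k][d][1]
    return mass, reliability  # minimize mass, maximize reliability

def fitness(design, w=0.5):
    mass, reliability = evaluate(design)
    # Weighted scalarization of the two objectives.
    return w * reliability - (1 - w) * mass / 250.0

def ga(pop_size=20, generations=50):
    population = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(KEYS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # mutation: re-pick one option
                i = random.randrange(len(KEYS))
                child[i] = random.randrange(len(OPTIONS[KEYS[i]]))
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = ga()
```

    A genuinely multi-objective variant would rank the population by Pareto dominance instead of a fixed-weight fitness, yielding the tradeoff surface the abstract describes.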

  9. Flight Test of an Adaptive Configuration Optimization System for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Georgie, Jennifer; Barnicki, Joseph S.

    1999-01-01

    A NASA Dryden Flight Research Center program explores the practical application of real-time adaptive configuration optimization for enhanced transport performance on an L-1011 aircraft. This approach is based on calculation of incremental drag from forced-response, symmetric, outboard aileron maneuvers. In real-time operation, the symmetric outboard aileron deflection is directly optimized, and the horizontal stabilator and angle of attack are indirectly optimized. A flight experiment has been conducted from an onboard research engineering test station, and flight research results are presented herein. The optimization system has demonstrated the capability of determining the minimum drag configuration of the aircraft in real time. The drag-minimization algorithm is capable of identifying drag to approximately a one-drag-count level. Optimizing the symmetric outboard aileron position realizes a drag reduction of 2-3 drag counts (approximately 1 percent). Analysis of the maneuvers indicates that two-sided raised-cosine maneuvers improve definition of the symmetric outboard aileron drag effect, thereby improving analysis results and consistency. Ramp maneuvers provide a more even distribution of data collection as a function of excitation deflection than raised-cosine maneuvers provide. A commercial operational system would require airdata calculations and normal output of current inertial navigation systems; engine pressure ratio measurements would be optional.
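
    The two-sided raised-cosine excitation mentioned above can be sketched as a smooth positive lobe followed by a mirrored negative lobe (the amplitude, period, and exact parameterization here are assumptions for illustration, not the flight-test values):

```python
import math

def raised_cosine(t, period=10.0, amplitude=2.0):
    """Two-sided raised-cosine deflection: one smooth up lobe then a mirrored
    down lobe, starting and ending each lobe at zero deflection."""
    half = period / 2.0
    phase = 2.0 * math.pi * (t % half) / half
    sign = 1.0 if (t % period) < half else -1.0
    return sign * amplitude * (1.0 - math.cos(phase)) / 2.0
```

    The smooth start and end of each lobe (zero deflection and zero deflection rate) is what distinguishes this excitation from a simple ramp.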

  10. Designing Industrial Networks Using Ecological Food Web Metrics.

    PubMed

    Layton, Astrid; Bras, Bert; Weissburg, Marc

    2016-10-18

    Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on a unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost- and emissions-based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics also were superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization generally ranged from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as that obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.
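
    As a rough illustration of the kind of structural food web metrics involved, the sketch below computes connectance, linkage density, and the prey-to-predator ratio from a small directed adjacency matrix (the matrix is invented; the paper's carpet recycling network is not reproduced here):

```python
# Directed adjacency matrix: web[i][j] = 1 if actor i receives material/energy
# from actor j (invented 4-actor network).
web = [
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]

def food_web_metrics(web):
    n = len(web)
    links = sum(map(sum, web))
    consumers = sum(1 for row in web if any(row))                   # actors with inputs
    prey = sum(1 for j in range(n) if any(row[j] for row in web))   # actors feeding others
    return {
        "connectance": links / n ** 2,       # L / N^2
        "linkage_density": links / n,        # L / N
        "prey_predator_ratio": prey / consumers,
    }

m = food_web_metrics(web)
```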

  11. Control strategies for wind farm power optimization: LES study

    NASA Astrophysics Data System (ADS)

    Ciri, Umberto; Rotea, Mario; Leonardi, Stefano

    2017-11-01

    Turbines in wind farms operate in off-design conditions as wake interactions occur for particular wind directions. Advanced wind farm control strategies aim at coordinating and adjusting turbine operations to mitigate power losses in such conditions. Coordination is achieved on upstream turbines by controlling either the wake intensity, through the blade pitch angle or the generator torque, or the wake direction, through yaw misalignment. Downstream turbines can be adapted to work in waked conditions and limit power losses, using the blade pitch angle or the generator torque. As wind conditions in wind farm operations may change significantly, it is difficult to determine and parameterize the variations of the coordinated optimal settings. An alternative is model-free control and optimization of wind farms, which does not require any parameterization and can track the optimal settings as conditions vary. In this work, we employ a model-free optimization algorithm, extremum-seeking control, to find the optimal set-points of generator torque, blade pitch and yaw angle for a three-turbine configuration. Large-Eddy Simulations are used to provide a virtual environment to evaluate the performance of the control strategies under realistic, unsteady incoming wind. This work was supported by the National Science Foundation, Grants No. 1243482 (the WINDINSPIRE project) and IIP 1362033 (I/UCRC WindSTAR). TACC is acknowledged for providing computational time.
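
    A minimal single-variable extremum-seeking loop conveys the idea (the static power map, dither amplitude, and gains below are invented; the paper applies the method to turbine set-points inside an LES): the controller perturbs the set-point with a sinusoidal dither, demodulates the high-passed power measurement to estimate the local gradient, and climbs it without any plant model.

```python
import math

def power_map(theta):
    # Static plant, unknown to the controller: maximum power at theta = 0.6.
    return 1.0 - (theta - 0.6) ** 2

def extremum_seeking(theta0=0.0, a=0.05, omega=1.0, gain=0.8,
                     dt=0.1, tau=2.0, steps=4000):
    theta_hat, power_avg = theta0, 0.0
    for k in range(steps):
        t = k * dt
        power = power_map(theta_hat + a * math.sin(omega * t))  # dithered probe
        power_avg += (power - power_avg) * dt / tau             # slow running mean
        # High-pass (power minus mean), demodulate with the dither: on average
        # this is proportional to the local gradient of the power map.
        grad_est = (power - power_avg) * math.sin(omega * t)
        theta_hat += gain * grad_est * dt                       # climb the gradient
    return theta_hat
```

    Because nothing in the loop depends on a model of `power_map`, the same scheme can track the optimum as conditions drift, which is the property the abstract exploits.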

  12. Optimized Li-Ion Electrolytes Containing Fluorinated Ester Co-Solvents

    NASA Technical Reports Server (NTRS)

    Prakash, G. K. Surya; Smart, Marshall; Smith, Kiah; Bugga, Ratnakumar

    2010-01-01

    A number of experimental lithium-ion cells, consisting of MCMB (meso-carbon microbeads) carbon anodes and LiNi(0.8)Co(0.2)O2 cathodes, have been fabricated with increased safety and expanded capability. These cells serve to verify and demonstrate the reversibility, low-temperature performance, and electrochemical aspects of each electrode as determined from a number of electrochemical characterization techniques. A number of Li-ion electrolytes possessing fluorinated ester co-solvents, namely trifluoroethyl butyrate (TFEB) and trifluoroethyl propionate (TFEP), were demonstrated to deliver good performance over a wide temperature range in experimental lithium-ion cells. The general approach taken in the development of these electrolyte formulations is to optimize the type and composition of the co-solvents in ternary and quaternary solutions, focusing upon adequate stability [i.e., EC (ethylene carbonate) content needed for anode passivation, and EMC (ethyl methyl carbonate) content needed for lowering the viscosity and widening the temperature range, while still providing good stability], enhancing the inherent safety characteristics (incorporation of fluorinated esters), and widening the temperature range of operation (the use of both fluorinated and non-fluorinated esters). Furthermore, the use of electrolyte additives, such as VC (vinylene carbonate) [solid electrolyte interface (SEI) promoter] and DMAc (thermal stabilizing additive), provides enhanced high-temperature life characteristics. Multi-component electrolyte formulations enhance performance over a temperature range of -60 to +60 C. Because safety is a key requirement for these batteries, flammability was also a consideration. One of the solvents investigated, TFEB, had the best performance with improved low-temperature capability and high-temperature resilience.
This work optimized the use of TFEB as a co-solvent by developing the multi-component electrolytes, which also contain non-halogenated esters, film forming additives, thermal stabilizing additives, and flame retardant additives. Further optimization of these electrolyte formulations is anticipated to yield improved performance. It is also anticipated that much improved performance will be demonstrated once these electrolyte solutions are incorporated into hermetically sealed, large capacity prototype cells, especially if effort is devoted to ensure that all electrolyte components are highly pure.

  13. Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.

    1999-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yielded a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
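
    The response-surface idea of sampling an expensive solver and optimizing a cheap local fit can be sketched with successive quadratic surrogates (the one-dimensional objective stands in for a Navier-Stokes evaluation; the paper's neural-network surfaces and design-space partitioning are not reproduced):

```python
def expensive_solver(x):
    # Stand-in for a costly time-accurate flow simulation; minimum at x = 1.25.
    return (x - 1.3) ** 2 + 0.1 * x

def surrogate_minimum(x0, x1, x2, y0, y1, y2):
    # Vertex of the quadratic through three sampled points (the "response surface").
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    if abs(den) < 1e-12:
        return None  # degenerate fit: stop refining
    return x1 - 0.5 * num / den

def optimize(f, samples=(0.0, 1.0, 2.0), iterations=8):
    pts = list(samples)
    for _ in range(iterations):
        x0, x1, x2 = sorted(pts)
        x_new = surrogate_minimum(x0, x1, x2, f(x0), f(x1), f(x2))
        if x_new is None:
            break
        pts = sorted(pts + [x_new], key=f)[:3]  # keep the three best samples
    return min(pts, key=f)

best_x = optimize(expensive_solver)
```

    The paper's neural-network surrogates play the same role as the quadratic here, trading a handful of expensive simulations for a cheap surface that can be searched exhaustively.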

  14. The Aeronautical Data Link: Decision Framework for Architecture Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Goode, Plesent W.

    2003-01-01

    A decision analytic approach that develops optimal data link architecture configuration and behavior to meet multiple conflicting objectives of concurrent and different airspace operations functions has previously been developed. The approach, premised on a formal taxonomic classification that correlates data link performance with operations requirements, information requirements, and implementing technologies, provides a coherent methodology for data link architectural analysis from top-down and bottom-up perspectives. This paper follows the previous research by providing more specific approaches for mapping and transitioning between the lower levels of the decision framework. The goal of the architectural analysis methodology is to assess the impact of specific architecture configurations and behaviors on the efficiency, capacity, and safety of operations. This necessarily involves understanding the various capabilities, system level performance issues and performance and interface concepts related to the conceptual purpose of the architecture and to the underlying data link technologies. Efficient and goal-directed data link architectural network configuration is conditioned on quantifying the risks and uncertainties associated with complex structural interface decisions. Deterministic and stochastic optimal design approaches will be discussed that maximize the effectiveness of architectural designs.

  15. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    DOE PAGES

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
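
    A toy version of GA-based task reordering illustrates the idea (the ring communication pattern, line-of-nodes interconnect, and GA parameters are all invented; the paper targets real application communication topologies and the Titan interconnect):

```python
import random

random.seed(1)  # reproducible illustration

N_TASKS = 12
# Scattered (non-contiguous) node allocation: position of each allocated node
# on a 1-D interconnect; task i communicates with task i+1 in a ring.
NODE_POS = random.sample(range(100), N_TASKS)

def comm_cost(perm):
    # Total hop distance between communicating neighbors for a task->node map.
    return sum(abs(NODE_POS[perm[i]] - NODE_POS[perm[(i + 1) % N_TASKS]])
               for i in range(N_TASKS))

def mutate(perm):
    p = perm[:]
    i, j = random.sample(range(N_TASKS), 2)  # swap two task placements
    p[i], p[j] = p[j], p[i]
    return p

def reorder(pop_size=30, generations=300):
    population = [random.sample(range(N_TASKS), N_TASKS) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=comm_cost)
        elite = population[: pop_size // 3]          # keep the best third
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return min(population, key=comm_cost)

best_perm = reorder()
```

    On a line of nodes the optimal ring visits positions in sorted order, so a working GA should drive the cost toward twice the span of the allocation.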

  16. A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.

    PubMed

    Rong, Xing; Frey, Eric C

    2013-08-01

    Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different from what is typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume of interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters).
To make the optimization results for quantitative 90Y bremsstrahlung SPECT more general, the authors simulated multiple tumors of various sizes in the liver. The authors realistically simulated human anatomy using a digital phantom and the image formation process using a previously validated and computationally efficient method for modeling the image-degrading effects including object scatter, attenuation, and the full collimator-detector response (CDR). The scatter kernels and CDR function tables used in the modeling method were generated using a previously validated Monte Carlo simulation code. The hole length, hole diameter, and septal thickness of the obtained optimal collimator were 84, 3.5, and 1.4 mm, respectively. Compared to a commercial high-energy general-purpose collimator, the optimal collimator improved the resolution and FOM by 27% and 18%, respectively. The proposed collimator optimization method may be useful for improving quantitative SPECT imaging for radionuclides with complex energy spectra. The obtained optimal collimator provided a substantial improvement in quantitative performance for the microsphere radioembolization task considered.
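
    The bias-plus-variance, inverse-mass-weighted figure of merit described above reduces to a short calculation; the volume-of-interest numbers below are invented for illustration:

```python
import math

# (mass g, true activity, estimator mean, estimator standard deviation) per
# volume of interest. All numbers are made up.
vois = [
    (20.0, 100.0, 92.0, 5.0),
    (5.0, 40.0, 37.0, 4.0),
    (1.0, 10.0, 8.5, 2.0),
]

def weighted_rmse(vois):
    num = den = 0.0
    for mass, true_act, mean, std in vois:
        mse = (mean - true_act) ** 2 + std ** 2  # bias^2 + variance
        w = 1.0 / mass                           # inverse-mass weight (dosimetry)
        num += w * mse
        den += w
    return math.sqrt(num / den)

fom = weighted_rmse(vois)
```

    Swapping the `1.0 / mass` line for another weighting adapts the figure of merit to a different task, which is the flexibility the abstract notes.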

  17. Development and testing of the cancer multidisciplinary team meeting observational tool (MDT-MOT)

    PubMed Central

    Harris, Jenny; Taylor, Cath; Sevdalis, Nick; Jalil, Rozh; Green, James S.A.

    2016-01-01

    Objective: To develop a tool for independent observational assessment of cancer multidisciplinary team meetings (MDMs), and test criterion validity, inter-rater reliability/agreement and describe performance. Design: Clinicians and experts in teamwork used a mixed-methods approach to develop and refine the tool. Study 1 observers rated pre-determined optimal/sub-optimal MDM film excerpts and Study 2 observers independently rated video-recordings of 10 MDMs. Setting: Study 2 included 10 cancer MDMs in England. Participants: Testing was undertaken by 13 health service staff and a clinical and non-clinical observer. Intervention: None. Main Outcome Measures: Tool development, validity, reliability/agreement and variability in MDT performance. Results: Study 1: Observers were able to discriminate between optimal and sub-optimal MDM performance (P ≤ 0.05). Study 2: Inter-rater reliability was good for 3/10 domains. Percentage of absolute agreement was high (≥80%) for 4/10 domains and percentage agreement within 1 point was high for 9/10 domains. Four MDTs performed well (scored 3+ in at least 8/10 domains), 5 MDTs performed well in 6–7 domains and 1 MDT performed well in only 4 domains. Leadership and chairing of the meeting, the organization and administration of the meeting, and clinical decision-making processes all varied significantly between MDMs (P ≤ 0.01). Conclusions: MDT-MOT demonstrated good criterion validity. Agreement between clinical and non-clinical observers (within one point on the scale) was high, but this was inconsistent with reliability coefficients and warrants further investigation. If further validated, MDT-MOT might provide a useful mechanism for the routine assessment of MDMs by the local workforce to drive improvements in MDT performance. PMID:27084499

  18. Development and testing of the cancer multidisciplinary team meeting observational tool (MDT-MOT).

    PubMed

    Harris, Jenny; Taylor, Cath; Sevdalis, Nick; Jalil, Rozh; Green, James S A

    2016-06-01

    To develop a tool for independent observational assessment of cancer multidisciplinary team meetings (MDMs), and test criterion validity, inter-rater reliability/agreement and describe performance. Clinicians and experts in teamwork used a mixed-methods approach to develop and refine the tool. Study 1 observers rated pre-determined optimal/sub-optimal MDM film excerpts and Study 2 observers independently rated video-recordings of 10 MDMs. Study 2 included 10 cancer MDMs in England. Testing was undertaken by 13 health service staff and a clinical and non-clinical observer. None. Tool development, validity, reliability/agreement and variability in MDT performance. Study 1: Observers were able to discriminate between optimal and sub-optimal MDM performance (P ≤ 0.05). Study 2: Inter-rater reliability was good for 3/10 domains. Percentage of absolute agreement was high (≥80%) for 4/10 domains and percentage agreement within 1 point was high for 9/10 domains. Four MDTs performed well (scored 3+ in at least 8/10 domains), 5 MDTs performed well in 6-7 domains and 1 MDT performed well in only 4 domains. Leadership and chairing of the meeting, the organization and administration of the meeting, and clinical decision-making processes all varied significantly between MDMs (P ≤ 0.01). MDT-MOT demonstrated good criterion validity. Agreement between clinical and non-clinical observers (within one point on the scale) was high, but this was inconsistent with reliability coefficients and warrants further investigation. If further validated, MDT-MOT might provide a useful mechanism for the routine assessment of MDMs by the local workforce to drive improvements in MDT performance. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
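
    The agreement statistics reported in both records (exact agreement and agreement within one scale point) reduce to a simple calculation; the two observers' domain ratings below are invented:

```python
# Two observers' scores for ten MDM domains (invented ratings).
clinical     = [4, 3, 2, 4, 3, 3, 4, 2, 3, 4]
non_clinical = [4, 2, 2, 3, 3, 4, 4, 2, 1, 4]

def percent_agreement(a, b, tolerance=0):
    # Share of paired ratings that differ by at most `tolerance` points.
    hits = sum(abs(x - y) <= tolerance for x, y in zip(a, b))
    return 100.0 * hits / len(a)

exact = percent_agreement(clinical, non_clinical)          # absolute agreement
within_one = percent_agreement(clinical, non_clinical, 1)  # within 1 point
```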

  19. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
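
    A toy version of the sizing problem shows the role of smooth cost and constraint expressions (all coefficients, the power-demand model, and the quadratic penalty below are invented; the study used a detailed vehicle simulation, not a closed-form cost):

```python
def life_cycle_cost(battery_kg, engine_kw):
    # Smooth surrogate: battery cost + engine cost + a usage term that
    # penalizes undersized total capability (coefficients invented).
    return 2.0 * battery_kg + 30.0 * engine_kw + 5000.0 / (battery_kg + engine_kw)

def power_penalty(battery_kg, engine_kw, demand_kw=60.0):
    # Smooth quadratic penalty for failing the power-demand constraint,
    # in the spirit of the paper's fourth and fifth conclusions.
    shortfall = max(0.0, demand_kw - (0.3 * battery_kg + engine_kw))
    return 1000.0 * shortfall ** 2

def objective(battery_kg, engine_kw):
    return life_cycle_cost(battery_kg, engine_kw) + power_penalty(battery_kg, engine_kw)

def best_design():
    best = None
    for b in range(10, 301, 5):       # battery weight, kg
        for e in range(10, 121, 2):   # heat engine rating, kW
            c = objective(b, e)
            if best is None or c < best[0]:
                best = (c, b, e)
    return best

cost, battery_kg, engine_kw = best_design()
```

    Because the penalty is smooth rather than a hard cut-off, the search surface stays well behaved near the constraint boundary, which is exactly why the paper stresses smooth cost and constraint expressions.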

  20. Multiobjective hyper heuristic scheme for system design and optimization

    NASA Astrophysics Data System (ADS)

    Rafique, Amer Farhan

    2012-11-01

    As system design becomes more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less efficient and effective. Single-objective optimization methods produce a unique optimal solution, whereas multiobjective methods produce a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to provide the system designer with the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the likelihood of reaching a global optimum solution. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds diversity to the population, resulting in accomplishment of the pre-defined goals set in the proposed scheme.
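
    A minimal hyper-heuristic loop conveys the high-level idea (the low-level moves, reward rule, and test objective are invented; the paper's low-level heuristics are full Genetic Algorithm, Simulated Annealing and Swarm Intelligence solvers, and its scheme is multiobjective):

```python
import random

random.seed(0)  # reproducible illustration

def objective(x):
    # Toy single-objective stand-in to minimize (optimum at (1, -2)).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

# Three invented low-level heuristics the high-level controller chooses among.
def small_step(x): return [xi + random.uniform(-0.1, 0.1) for xi in x]
def big_step(x):   return [xi + random.uniform(-1.0, 1.0) for xi in x]
def restart(x):    return [random.uniform(-5.0, 5.0) for _ in x]

def hyper_heuristic(steps=3000):
    heuristics = [small_step, big_step, restart]
    scores = [1.0] * len(heuristics)     # stochastic selection weights
    x = [0.0, 0.0]
    fx = objective(x)
    for _ in range(steps):
        i = random.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[i](x)
        if objective(candidate) < fx:    # accept only improvements
            x, fx = candidate, objective(candidate)
            scores[i] += 1.0             # reward the improving heuristic
        else:
            scores[i] = max(0.1, scores[i] * 0.999)
    return x, fx

x_best, f_best = hyper_heuristic()
```

    The controller never inspects the heuristics themselves; it only tracks which ones have recently improved the incumbent, which is the essence of the hyper-heuristic layer.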
