Does unbelted safety requirement affect protection for belted occupants?
Hu, Jingwen; Klinich, Kathleen D; Manary, Miriam A; Flannagan, Carol A C; Narayanaswamy, Prabha; Reed, Matthew P; Andreen, Margaret; Neal, Mark; Lin, Chin-Hsu
2017-05-29
Federal regulations in the United States require vehicles to meet occupant performance requirements with unbelted test dummies. Removing the test requirements with unbelted occupants might encourage the deployment of seat belt interlocks and allow restraint optimization to focus on belted occupants. The objective of this study is to compare the performance of restraint systems optimized for belted-only occupants with those optimized for both belted and unbelted occupants using computer simulations and field crash data analyses. In this study, 2 validated finite element (FE) vehicle/occupant models (a midsize sedan and a midsize SUV) were selected. Restraint design optimizations under standardized crash conditions (U.S.-NCAP and FMVSS 208) with and without unbelted requirements were conducted using Hybrid III (HIII) small female and midsize male anthropomorphic test devices (ATDs) in both vehicles in both driver and right front passenger positions. A total of 10 to 12 design parameters were varied in each optimization using a combination of the response surface method (RSM) and a genetic algorithm. To evaluate the field performance of restraints optimized with and without unbelted requirements, 55 frontal crash conditions covering a greater variety of crash types than the standardized crashes were selected. A total of 1,760 FE simulations were conducted for the field performance evaluation. Frontal crashes in the NASS-CDS database from 2002 to 2012 were used to develop injury risk curves, to provide the baseline performance of current restraint systems, and to estimate the injury risk change from removing the unbelted requirement. Unbelted requirements did not affect the optimal seat belt and airbag design parameters in 3 of the 4 vehicle/occupant position conditions; the exception was the SUV passenger side. Overall, compared to the optimal designs with unbelted requirements, optimal designs without unbelted requirements generated the same or lower total injury risks for belted occupants, depending on the statistical methods used for the analysis, but they could also increase the total injury risks for unbelted occupants. This study demonstrated the potential for reducing injury risks to belted occupants if the unbelted requirements are eliminated. Further investigations are necessary to confirm these findings.
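A minimal sketch of the RSM-plus-genetic-algorithm loop the abstract describes (fit a response surface to expensive simulations, then search it with a genetic algorithm); the injury-risk function, variable names, and bounds are hypothetical stand-ins, not the study's FE models:

    import numpy as np

    rng = np.random.default_rng(0)

    def injury_risk(x):  # hypothetical stand-in for one FE crash simulation
        belt_load, vent_area = x
        return (belt_load - 0.4)**2 + 0.5*(vent_area - 0.6)**2 + 0.1*belt_load*vent_area

    def quad_features(X):  # terms of a full quadratic response surface in 2 variables
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])

    # 1) sample the design space and fit the response surface
    X = rng.uniform(0.0, 1.0, size=(25, 2))
    y = np.array([injury_risk(x) for x in X])
    coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    surrogate = lambda P: quad_features(P) @ coef

    # 2) genetic algorithm over the cheap surrogate
    pop = rng.uniform(0.0, 1.0, size=(40, 2))
    for _ in range(60):
        parents = pop[np.argsort(surrogate(pop))[:20]]           # truncation selection
        a = parents[rng.integers(0, 20, 40)]
        b = parents[rng.integers(0, 20, 40)]
        w = rng.uniform(size=(40, 1))
        pop = np.clip(w*a + (1 - w)*b                            # blend crossover
                      + rng.normal(0.0, 0.02, (40, 2)), 0.0, 1.0)  # mutation

    best = pop[np.argmin(surrogate(pop))]
    print("surrogate optimum:", best, "simulated risk:", injury_risk(best))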
Turbine Performance Optimization Task Status
NASA Technical Reports Server (NTRS)
Griffin, Lisa W.; Turner, James E. (Technical Monitor)
2001-01-01
Capability to optimize turbine performance and accurately predict unsteady loads will allow for increased reliability, Isp, and thrust-to-weight ratio. The development of a fast, accurate aerodynamic design, analysis, and optimization system is required.
Improving scanner wafer alignment performance by target optimization
NASA Astrophysics Data System (ADS)
Leray, Philippe; Jehoul, Christiane; Socha, Robert; Menchtchikov, Boris; Raghunathan, Sudhar; Kent, Eric; Schoonewelle, Hielke; Tinnemans, Patrick; Tuffy, Paul; Belen, Jun; Wise, Rich
2016-03-01
In the process nodes of 10nm and below, the patterning complexity along with the processing and materials required has resulted in a need to optimize alignment targets in order to achieve the required precision, accuracy and throughput performance. Recent industry publications on the metrology target optimization process have shown a move from the expensive and time consuming empirical methodologies, towards a faster computational approach. ASML's Design for Control (D4C) application, which is currently used to optimize YieldStar diffraction based overlay (DBO) metrology targets, has been extended to support the optimization of scanner wafer alignment targets. This allows the necessary process information and design methodology, used for DBO target designs, to be leveraged for the optimization of alignment targets. In this paper, we show how we applied this computational approach to wafer alignment target design. We verify the correlation between predictions and measurements for the key alignment performance metrics and finally show the potential alignment and overlay performance improvements that an optimized alignment target could achieve.
Performance optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1991-01-01
As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general purpose optimization program CONMIN and approximate analyses. Sensitivity analyses consisting of derivatives of the objective function and constraints are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.
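A minimal sketch of this optimization structure, assuming hypothetical analytic stand-ins for the HOVT and CAMRAD analyses; note the forward-finite-difference sensitivities, as in the paper:

    import numpy as np
    from scipy.optimize import minimize

    def hover_power(x):        # hypothetical surrogate for the HOVT hover analysis
        r_taper, chord, ratio, twist = x
        return (chord/0.5)**2 + 2.0*(ratio - 0.6)**2 + 0.5*(twist - 0.75)**2 + 0.3*r_taper

    def ff_power(x):           # hypothetical surrogate for the CAMRAD forward-flight analysis
        return 1.5 - 0.8*x[1] + 0.4*x[2]

    def fd_grad(f, x, h=1e-6): # sensitivities by forward finite differences
        f0, g = f(x), np.zeros_like(x)
        for i in range(len(x)):
            xp = x.copy(); xp[i] += h
            g[i] = (f(xp) - f0) / h
        return g

    res = minimize(hover_power, x0=np.array([0.5, 0.5, 0.6, 0.7]),
                   jac=lambda x: fd_grad(hover_power, x), method="SLSQP",
                   bounds=[(0.2, 0.9), (0.2, 1.0), (0.3, 1.0), (0.0, 1.0)],
                   constraints=[{"type": "ineq",          # forward-flight power limit
                                 "fun": lambda x: 1.2 - ff_power(x),
                                 "jac": lambda x: -fd_grad(ff_power, x)}])
    print(res.x, res.fun)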
NASA Astrophysics Data System (ADS)
Karri, Naveen K.; Mo, Changki
2018-06-01
Structural reliability of thermoelectric generation (TEG) systems still remains an issue, especially for applications such as large-scale industrial or automobile exhaust heat recovery, in which TEG systems are subject to dynamic loads and thermal cycling. Traditional thermoelectric (TE) system design and optimization techniques, focused on performance alone, could result in designs that may fail during operation as the geometric requirements for optimal performance (especially the power) are often in conflict with the requirements for mechanical reliability. This study focused on reducing the thermomechanical stresses in a TEG system without compromising the optimized system performance. Finite element simulations were carried out to study the effect of TE element (leg) geometry, such as leg length and cross-sectional shape, under constrained material volume requirements. Results indicated that the element length has a major influence on the element stresses whereas regular cross-sectional shapes have minor influence. The impact of TE element stresses on the mechanical reliability was evaluated using brittle material failure theory based on Weibull analysis. An alternate couple configuration that relies on the industry practice of redundant element design was investigated. Results showed that the alternate configuration considerably reduced the TE element and metallization stresses, thereby enhancing the structural reliability, with little trade-off in the optimized performance. The proposed alternate configuration could serve as a potential design modification for improving the reliability of systems optimized for thermoelectric performance.
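For reference, the Weibull weakest-link failure probability used in such brittle-failure assessments can be sketched as follows; the modulus and characteristic strength are illustrative values, not material data from the study:

    import numpy as np

    def weibull_failure_probability(sigma, sigma_0, m):
        """P_f = 1 - exp(-(sigma/sigma_0)^m) for a uniformly stressed element."""
        return 1.0 - np.exp(-(np.asarray(sigma) / sigma_0) ** m)

    # Example: comparing two candidate leg designs with different peak stresses.
    print(weibull_failure_probability([45e6, 30e6], sigma_0=80e6, m=8.0))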
Performance Optimizing Adaptive Control with Time-Varying Reference Model Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hashemi, Kelley E.
2017-01-01
This paper presents a new adaptive control approach that involves a performance optimization objective. The control synthesis involves the design of a performance optimizing adaptive controller from a subset of control inputs. The resulting effect of the performance optimizing adaptive controller is to modify the initial reference model into a time-varying reference model which satisfies the performance optimization requirement obtained from an optimal control problem. The time-varying reference model modification is accomplished by the real-time solutions of the time-varying Riccati and Sylvester equations coupled with the least-squares parameter estimation of the sensitivities of the performance metric. The effectiveness of the proposed method is demonstrated by an application of maneuver load alleviation control for a flexible aircraft.
The automatic neutron guide optimizer guide_bot
NASA Astrophysics Data System (ADS)
Bertelsen, Mads
2017-09-01
The guide optimization software guide_bot is introduced, the main purpose of which is to reduce the time spent programming when performing numerical optimization of neutron guides. A limited amount of information on the overall guide geometry and a figure of merit describing the desired beam is used to generate the code necessary to solve the problem. A generated McStas instrument file performs the Monte Carlo ray-tracing, which is controlled by iFit optimization scripts. The resulting optimal guide is thoroughly characterized, both in terms of brilliance transfer from an idealized source and on a more realistic source such as the ESS Butterfly moderator. Basic MATLAB knowledge is required from the user, but no experience with McStas or iFit is necessary. This paper briefly describes how guide_bot is used and some important aspects of the code. A short validation against earlier work is performed which shows the expected agreement. In addition a scan over the vertical divergence requirement, where individual guide optimizations are performed for each corresponding figure of merit, provides valuable data on the consequences of this parameter. The guide_bot software package is best suited for the start of an instrument design project as it excels at comparing a large amount of different guide alternatives for a specific set of instrument requirements, but is still applicable in later stages as constraints can be used to optimize more specific guides.
Scheduler Design Criteria: Requirements and Considerations
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2016-01-01
This presentation covers fundamental requirements and considerations for developing schedulers in airport operations. We first introduce performance and functional requirements for airport surface schedulers. Among various optimization problems in airport operations, we focus on the airport surface scheduling problem, including runway and taxiway operations. We then describe a basic methodology for airport surface scheduling, such as the node-link network model and previously developed scheduling algorithms. Next, we explain in more detail how to design a mathematical formulation, which consists of objectives, decision variables, and constraints. Lastly, we review other considerations, including optimization tools, computational performance, and performance metrics for evaluation.
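A toy sketch of the formulation elements listed above (objective, decision variables, constraints), using brute-force search over departure orders; the aircraft data and separation times are hypothetical, and real surface schedulers use MILP or heuristics over a node-link network rather than enumeration:

    from itertools import permutations

    ready = {"AC1": 0, "AC2": 30, "AC3": 45}        # earliest runway times (s)
    sep = 60                                         # required separation (s)

    def total_delay(order):
        t, delay = 0, 0
        for ac in order:
            t = max(t, ready[ac])                    # wait until the aircraft is ready
            delay += t - ready[ac]
            t += sep                                 # runway occupancy plus wake separation
        return delay

    best = min(permutations(ready), key=total_delay)
    print(best, total_delay(best))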
A survey of compiler optimization techniques
NASA Technical Reports Server (NTRS)
Schneck, P. B.
1972-01-01
Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
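As a concrete example of the third category, a minimal constant-folding pass over a toy expression tree (the AST encoding here is hypothetical, chosen for brevity):

    def fold(node):
        """Recursively replace operator nodes with constant operands by their value."""
        if isinstance(node, tuple):                 # ("op", left, right)
            op, l, r = node
            l, r = fold(l), fold(r)
            if isinstance(l, (int, float)) and isinstance(r, (int, float)):
                return {"+": l + r, "*": l * r, "-": l - r}[op]
            return (op, l, r)
        return node                                 # leaf: constant or variable name

    # x * (2 + 3)  ->  ("*", "x", 5)
    print(fold(("*", "x", ("+", 2, 3))))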
Thermal/structural Tailoring of Engine Blades (T/STAEBL) User's Manual
NASA Technical Reports Server (NTRS)
Brown, K. W.; Clevenger, W. B.; Arel, J. D.
1994-01-01
The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual contains an overview of the system, fundamentals of the data block structure, and detailed descriptions of the inputs required by the optimizer. Additionally, the thermal analysis input requirements are described as well as the inputs required to perform a finite element blade vibrations analysis.
NASA Technical Reports Server (NTRS)
Holmes, B. J.
1980-01-01
A design study has been conducted to optimize a single-engine airplane for a high-performance cruise mission. The mission analyzed included a cruise speed of about 300 knots, a cruise range of about 1300 nautical miles, and a six-passenger payload (5340 N (1200 lb)). The purpose of the study was to investigate the combinations of wing design, engine, and operating altitude required for the mission. The results show that these mission performance characteristics can be achieved with fuel efficiencies competitive with present-day high-performance, single- and twin-engine business airplanes. It is noted that relaxation of the present Federal Aviation Regulation, Part 23, stall-speed requirement for single-engine airplanes facilitates the optimization of the airplane for fuel efficiency.
NASA Technical Reports Server (NTRS)
Williams, Jacob; Davis, Elizabeth C.; Lee, David E.; Condon, Gerald L.; Dawn, Tim
2009-01-01
The Orion spacecraft will be required to perform a three-burn trans-Earth injection (TEI) maneuver sequence to return to Earth from low lunar orbit. The origin of this approach lies in the Constellation Program requirements for access to any lunar landing site location combined with anytime lunar departure. This paper documents the development of optimized databases used to rapidly model the performance requirements of the TEI three-burn sequence for an extremely large number of mission cases. It also discusses performance results for lunar departures covering a complete 18.6 year lunar nodal cycle as well as general characteristics of the optimized three-burn TEI sequence.
Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.
1992-01-01
This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.
The Role of Efficient XML Interchange (EXI) in Navy Wide-Area Network (WAN) Optimization
2015-03-01
compress, and re-encrypt data to continue providing optimization through compression; however, that capability requires careful consideration of... optimization of encrypted data requires a careful analysis and comparison of performance improvements and IA vulnerabilities. It is important... Contained EXI capitalizes on multiple techniques to improve compression, and they vary depending on a set of EXI options passed to the codec
In-flight performance optimization for rotorcraft with redundant controls
NASA Astrophysics Data System (ADS)
Ozdemir, Gurbuz Taha
A conventional helicopter has limits on performance at high speeds because of limitations of the main rotor, such as compressibility effects on the advancing side and stall on the retreating side. Auxiliary lift and thrust components have been suggested to improve performance of the helicopter substantially by reducing the loading on the main rotor. Such a configuration is called a compound rotorcraft. Rotor speed can also be varied to improve helicopter performance. In addition to improved performance, compound rotorcraft and variable RPM can provide a much larger degree of control redundancy. This additional redundancy gives the opportunity to further enhance performance and handling qualities. A flight control system is designed to perform in-flight optimization of redundant control effectors on a compound rotorcraft in order to minimize power required and extend range. This "Fly to Optimal" (FTO) control law is tested in simulation using the GENHEL model. Models of the UH-60, a compound version of the UH-60A with lifting wing and vectored thrust ducted propeller (VTDP), and a generic compound version of the UH-60A with lifting wing and propeller were developed and tested in simulation. A model-following dynamic inversion controller is implemented for inner-loop control of roll, pitch, yaw, heave, and rotor RPM. An outer-loop controller regulates airspeed and flight path during optimization. A Golden Section search method was used to find optimal rotor RPM on a conventional helicopter, where the single redundant control effector is rotor RPM. The FTO approach builds on the Adaptive Performance Optimization (APO) method of Gilyard, which performed low-frequency sweeps on a redundant control of a fixed-wing aircraft. A method based on the APO method was used to optimize trim on a compound rotorcraft with several redundant control effectors. The controller can be used to optimize rotor RPM and compound control effectors through flight test or simulation in order to establish a schedule. The method has been expanded to search a two-dimensional control space. Simulation results demonstrate the ability to maximize range by optimizing stabilator deflection and an airspeed set point. Another set of results minimizes power required in high-speed flight by optimizing collective pitch and stabilator deflection. Results show that the control laws effectively hold the flight condition while the FTO method is effective at improving performance. Optimizations showed there can be issues when the control laws regulating altitude push the collective control toward its limits, so a modification was made to the control law to regulate airspeed and altitude using propeller pitch and angle of attack while the collective is held fixed or used as an optimization variable. A dynamic trim limit avoidance algorithm is applied to avoid control saturation in other axes during optimization maneuvers. Range and power optimization FTO simulations are compared with comprehensive sweeps of trim solutions, and FTO optimization is shown to be effective and reliable in reaching an optimum when optimizing up to two redundant controls. Use of redundant controls is shown to be beneficial for improving performance. The search method takes almost 25 minutes of simulated flight for the optimization to complete. The optimization maneuver itself can sometimes drive the power required to high values, so a power limit is imposed to restrict the search and avoid conditions where power is more than 5% higher than that of the initial trim state. With this modification, the time the optimization maneuver takes to complete is reduced to 21 minutes without any significant change in the optimal power value.
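A minimal golden-section search of the kind used here to find a power-optimal rotor RPM; the power-required curve is a hypothetical bowl-shaped stand-in for the GENHEL trim evaluations:

    import math

    def golden_section_min(f, a, b, tol=1e-3):
        invphi = (math.sqrt(5) - 1) / 2            # 1/phi, about 0.618
        c, d = b - invphi*(b - a), a + invphi*(b - a)
        while (b - a) > tol:
            if f(c) < f(d):                        # minimum lies in [a, d]
                b, d = d, c
                c = b - invphi*(b - a)
            else:                                  # minimum lies in [c, b]
                a, c = c, d
                d = a + invphi*(b - a)
        return 0.5*(a + b)

    power_required = lambda rpm: 1200 + 0.04*(rpm - 245)**2   # hypothetical curve
    print(golden_section_min(power_required, 220, 280))        # -> about 245 RPM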
Display/control requirements for automated VTOL aircraft
NASA Technical Reports Server (NTRS)
Hoffman, W. C.; Kleinman, D. L.; Young, L. R.
1976-01-01
A systematic design methodology for pilot displays in advanced commercial VTOL aircraft was developed and refined. The analyst is provided with a step-by-step procedure for conducting conceptual display/control configurations evaluations for simultaneous monitoring and control pilot tasks. The approach consists of three phases: formulation of information requirements, configuration evaluation, and system selection. Both the monitoring and control performance models are based upon the optimal control model of the human operator. Extensions to the conventional optimal control model required in the display design methodology include explicit optimization of control/monitoring attention; simultaneous monitoring and control performance predictions; and indifference threshold effects. The methodology was applied to NASA's experimental CH-47 helicopter in support of the VALT program. The CH-47 application examined the system performance of six flight conditions. Four candidate configurations are suggested for evaluation in pilot-in-the-loop simulations and eventual flight tests.
NASA Astrophysics Data System (ADS)
Feng, Jianjun; Li, Chengzhe; Wu, Zhi
2017-08-01
As a key component of the valve opening and closing mechanism in an engine, the camshaft has high machining accuracy requirements. Taking the high-speed camshaft grinder spindle system as the research object and the spindle system performance as the optimization target, this paper first uses SolidWorks to establish a three-dimensional finite element model (FEM) of the spindle system, then conducts static and modal analyses by applying the established FEM in ANSYS Workbench, and finally uses the design optimization function of ANSYS Workbench to optimize the structural parameters of the spindle system. The results show that the design of the spindle system fully meets the production requirements and that the performance of the optimized spindle system is improved. In addition, this paper provides an analysis and optimization method for other grinder spindle systems.
NASA Astrophysics Data System (ADS)
Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Hsu, Michael; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Chen, Fung; Socha, Robert; Park, JungChul; Gronlund, Keith
2005-11-01
Illumination optimization, often combined with optical proximity corrections (OPC) to the mask, is becoming one of the critical components for a production-worthy lithography process for 55nm-node DRAM/Flash memory devices and beyond. At low-k1, e.g. k1<0.31, both resolution and imaging contrast can be severely limited by the current imaging tools while using the standard illumination sources. Illumination optimization is a process where the source shape is varied, in both profile and intensity distribution, to achieve enhancement in the final image contrast as compared to using the non-optimized sources. The optimization can be done efficiently for repetitive patterns such as DRAM/Flash memory cores. However, illumination optimization often produces source shapes that are "free-form" like and they can be too complex to be directly applicable for production and lack the necessary radial and annular symmetries desirable for the diffractive optical element (DOE) based illumination systems in today's leading lithography tools. As a result, post-optimization rendering and verification of the optimized source shape are often necessary to meet the production-ready or manufacturability requirements and ensure optimal performance gains. In this work, we describe our approach to the illumination optimization for k1<0.31 DRAM/Flash memory patterns, using an ASML XT:1400i at NA 0.93, where all the necessary manufacturability requirements are fully accounted for during the optimization. The imaging contrast in the resist is optimized in a reduced solution space constrained by the manufacturability requirements, which include minimum distance between poles, minimum opening pole angles, minimum ring width and minimum source filling factor in the sigma space. For additional performance gains, the intensity within the optimized source can vary in a gray-tone fashion (eight shades used in this work). Although this new optimization approach can sometimes produce closely spaced solutions as gauged by the NILS-based metrics, we show that the optimal and production-ready source shape solution can be easily determined by comparing the best solutions to the "free-form" solution and, more importantly, by their respective imaging fidelity and process latitude ranking. Imaging fidelity and process latitude simulations are performed to analyze the impact and sensitivity of the manufacturability requirements on pattern-specific illumination optimizations using the ASML XT:1400i and other latest imaging systems. Mask model based OPC (MOPC) is applied and optimized sequentially to ensure that the CD uniformity requirements are met.
NASA Astrophysics Data System (ADS)
Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry
1998-08-01
All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment which allows sub-system performance specifications to be analyzed parametrically, and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and nonlinear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly links to science requirements.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
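The benefit claimed above can be illustrated with a generic stand-in objective (not a Pycycle model): the analytic-gradient optimization reaches the optimum with far fewer objective evaluations than the finite-difference version.

    import numpy as np
    from scipy.optimize import minimize

    def f(x):                  # generic smooth test objective (Rosenbrock-like)
        return (x[0] - 1.0)**2 + 10.0*(x[1] - x[0]**2)**2

    def grad_f(x):             # analytic gradient
        return np.array([2.0*(x[0] - 1.0) - 40.0*x[0]*(x[1] - x[0]**2),
                         20.0*(x[1] - x[0]**2)])

    x0 = np.array([-1.2, 1.0])
    analytic = minimize(f, x0, jac=grad_f, method="BFGS")
    fdiff = minimize(f, x0, method="BFGS")   # scipy falls back to finite differences
    print("analytic derivatives: ", analytic.nfev, "objective evaluations")
    print("finite differences:   ", fdiff.nfev, "objective evaluations")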
The effect of code expanding optimizations on instruction cache design
NASA Technical Reports Server (NTRS)
Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.
1991-01-01
It is shown that code expanding optimizations have strong and non-intuitive implications on instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.
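A toy direct-mapped instruction cache simulator of the sort used for such miss-ratio comparisons; the address traces and cache geometry are hypothetical, with the second trace doubling the code footprint much as aggressive inline expansion might:

    def miss_ratio(trace, cache_bytes=1024, line_bytes=32):
        n_lines = cache_bytes // line_bytes
        tags = [None] * n_lines
        misses = 0
        for addr in trace:
            line = addr // line_bytes
            idx, tag = line % n_lines, line // n_lines
            if tags[idx] != tag:       # cold or conflict miss
                tags[idx] = tag
                misses += 1
        return misses / len(trace)

    # a loop that just fits the cache vs. the same loop with its footprint doubled
    base = list(range(0, 1024, 4)) * 4
    inlined = list(range(0, 2048, 4)) * 4
    print(miss_ratio(base), miss_ratio(inlined))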
Xu, Gang; Liang, Xifeng; Yao, Shuanbao; Chen, Dawei; Li, Zhiwei
2017-01-01
Minimizing the aerodynamic drag and the lift of the train coach remains a key issue for high-speed trains. With the development of computing technology and computational fluid dynamics (CFD) in the engineering field, CFD has been successfully applied to the design process of high-speed trains. However, developing a new streamlined shape for high-speed trains with excellent aerodynamic performance requires huge computational costs. Furthermore, relationships between multiple design variables and the aerodynamic loads are seldom obtained. In the present study, the Kriging surrogate model is used to perform a multi-objective optimization of the streamlined shape of high-speed trains, where the drag and the lift of the train coach are the optimization objectives. To improve the prediction accuracy of the Kriging model, the cross-validation method is used to construct the optimal Kriging model. The optimization results show that the two objectives are efficiently optimized, indicating that the optimization strategy used in the present study can greatly improve the optimization efficiency and meet the engineering requirements.
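A brief sketch of the Kriging-with-cross-validation strategy described here, using scikit-learn's Gaussian process regressor on random stand-ins for CFD drag samples (the real study optimized a parameterized streamlined shape):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(40, 3))      # three hypothetical shape variables
    drag = X[:, 0]**2 + 0.5*X[:, 1] + 0.1*rng.normal(size=40)   # stand-in CFD samples

    scores = []
    for ls in (0.1, 0.3, 1.0, 3.0):              # cross-validate the kernel length scale
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=ls),
                                      optimizer=None, normalize_y=True)
        scores.append((cross_val_score(gp, X, drag, cv=5).mean(), ls))
    print("best CV R^2 %.3f at length scale %.1f" % max(scores))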
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report presents two numerical methods considered for the computation of fuel-optimal, low-thrust orbit transfers in large numbers of burns. The origins of these methods are observations made with the extremal solutions of transfers in small numbers of burns; there seems to exist a trend such that the longer the time allowed to perform an optimal transfer, the less fuel is used. These longer transfers are obviously of interest since they require a motor of low thrust; however, we also find a trend that the longer the time allowed to perform the optimal transfer, the more burns are required to satisfy optimality. Unfortunately, this usually increases the difficulty of computation. Both of the methods described use small-numbered burn solutions to determine solutions in large numbers of burns. One method is a homotopy method that corrects for problems that arise when a solution requires a new burn or coast arc for optimality. The other method is to simply patch together long transfers from smaller ones. An orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.
Shape Optimization and Modular Discretization for the Development of a Morphing Wingtip
NASA Astrophysics Data System (ADS)
Morley, Joshua
Better knowledge in the areas of aerodynamics and optimization has allowed designers to develop efficient wingtip structures in recent years. However, the requirements faced by wingtip devices can be considerably different amongst an aircraft's flight regimes. Traditional static wingtip devices are then a compromise between conflicting requirements, resulting in less than optimal performance within each regime. Alternatively, a morphing wingtip can reconfigure, leading to improved performance over a range of dissimilar flight conditions. Developed within this thesis is a modular morphing wingtip concept that centers on the use of variable geometry truss mechanisms to permit morphing. A conceptual design framework is established to aid in the development of the concept. The framework uses a metaheuristic optimization procedure to determine optimal continuous wingtip configurations. The configurations are then discretized for the modular concept. The functionality of the framework is demonstrated through a design study on a hypothetical wing/winglet within the thesis.
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are imposed as equality constraints, which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are imposed as equality constraints, which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Optimization and evaluation of a proportional derivative controller for planar arm movement.
Jagodnik, Kathleen M; van den Bogert, Antonie J
2010-04-19
In most clinical applications of functional electrical stimulation (FES), the timing and amplitude of electrical stimuli have been controlled by open-loop pattern generators. The control of upper extremity reaching movements, however, will require feedback control to achieve the required precision. Here we present three controllers using proportional derivative (PD) feedback to stimulate six arm muscles, using two joint angle sensors. Controllers were first optimized and then evaluated on a computational arm model that includes musculoskeletal dynamics. Feedback gains were optimized by minimizing a weighted sum of position errors and muscle forces. Generalizability of the controllers was evaluated by performing movements for which the controller was not optimized, and robustness was tested via model simulations with randomly weakened muscles. Robustness was further evaluated by adding joint friction and doubling the arm mass. After optimization with a properly weighted cost function, all PD controllers performed fast, accurate, and robust reaching movements in simulation. Oscillatory behavior was seen after improper tuning. Performance improved slightly as the complexity of the feedback gain matrix increased.
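A reduced sketch of the tuning procedure: optimize PD gains by minimizing a weighted sum of tracking error and actuator effort on a simulated plant. A single-joint linear arm stands in for the paper's six-muscle musculoskeletal model:

    import numpy as np
    from scipy.optimize import minimize

    def simulate_cost(gains, target=1.0, dt=0.005, T=2.0, w_effort=1e-3):
        kp, kd = gains
        theta = omega = cost = 0.0
        for _ in range(int(T / dt)):                  # explicit-Euler plant simulation
            torque = kp*(target - theta) - kd*omega   # PD feedback law
            omega += (torque - 0.5*omega) * dt        # unit inertia, light damping
            theta += omega * dt
            cost += ((target - theta)**2 + w_effort*torque**2) * dt
        return cost

    res = minimize(simulate_cost, x0=[10.0, 1.0], method="Nelder-Mead")
    print("optimized gains: kp=%.1f, kd=%.1f" % tuple(res.x))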
Use of constrained optimization in the conceptual design of a medium-range subsonic transport
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1980-01-01
Constrained parameter optimization was used to perform the optimal conceptual design of a medium range transport configuration. The impact of choosing a given performance index was studied, and the required income for a 15 percent return on investment was proposed as a figure of merit. A number of design constants and constraint functions were systematically varied to document the sensitivities of the optimal design to a variety of economic and technological assumptions. A comparison was made for each of the parameter variations between the baseline configuration and the optimally redesigned configuration.
Continuous performance measurement in flight systems. [sequential control model
NASA Technical Reports Server (NTRS)
Connelly, E. M.; Sloan, N. A.; Zeskind, R. M.
1975-01-01
The desired response of many man-machine control systems can be formulated as a solution to an optimal control synthesis problem where the cost index is given and the resulting optimal trajectories correspond to the desired trajectories of the man-machine system. Optimal control synthesis provides the reference criteria and the significance of error information required for performance measurement. The synthesis procedure described provides a continuous performance measure (CPM) which is independent of the mechanism generating the control action. Therefore, the technique provides a meaningful method for online evaluation of man's control capability in terms of total man-machine performance.
Optimal design of a hybrid MR brake for haptic wrist application
NASA Astrophysics Data System (ADS)
Nguyen, Quoc Hung; Nguyen, Phuong Bac; Choi, Seung-Bok
2011-03-01
In this work, a new configuration of a magnetorheological (MR) brake is proposed and an optimal design of the proposed MR brake for haptic wrist application is performed considering the required braking torque, the zero-field friction torque, and the size and mass of the brake. The proposed MR brake configuration is a combination of disc-type and drum-type, which is referred to as a hybrid configuration in this study. After the MR brake with the hybrid configuration is proposed, the braking torque of the brake is analyzed based on the Bingham rheological model of the MR fluid. The zero-field friction torque of the MR brake is also obtained. An optimization procedure based on finite element analysis integrated with an optimization tool is developed for the MR brake. The purpose of the optimal design is to find the optimal geometric dimensions of the MR brake structure that can produce the required braking torque and minimize the uncontrollable torque (passive torque) of the haptic wrist. Based on the developed optimization procedure, an optimal solution of the proposed MR brake is achieved. The proposed optimized hybrid brake is then compared with conventional types of MR brake, and the working performance of the proposed MR brake is discussed.
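A compact sketch of the disc-type braking-torque estimate from the Bingham model mentioned above, integrating the shear stress over the disc face; fluid properties and geometry are illustrative values, not the optimized design:

    import numpy as np
    from scipy.integrate import quad

    tau_y, eta = 40e3, 0.1       # field-on yield stress (Pa) and viscosity (Pa s)
    gap, omega = 1e-3, 20.0      # fluid gap (m) and slip speed (rad/s)
    r_in, r_out = 0.02, 0.05     # radii of the active annulus (m)

    def shear_stress(r):         # Bingham plastic: tau = tau_y + eta * gamma_dot
        return tau_y + eta * omega * r / gap

    torque, _ = quad(lambda r: 2.0*np.pi*r**2 * shear_stress(r), r_in, r_out)
    print("braking torque per disc face: %.1f N m" % torque)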
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The fields of nature-inspired computing and optimization techniques have evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), the fireflies are ranked using a sorting algorithm. The original FA was proposed with bubble sort for ranking the fireflies. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used is the unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst, and mean fitness, standard deviation, number of comparisons, and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and when the problem dimension is varied the algorithm performs better at lower dimensions than at higher ones.
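A compact firefly algorithm sketch matching this description; the ranking step is where the bubble-sort/quick-sort substitution applies (delegated here to numpy's sort), and the sphere function is a standard unconstrained benchmark:

    import numpy as np

    def firefly(f, dim=5, n=25, iters=200, alpha=0.2, beta0=1.0, gamma=0.01):
        rng = np.random.default_rng(2)
        pop = rng.uniform(-5.0, 5.0, size=(n, dim))
        for _ in range(iters):
            light = np.array([f(x) for x in pop])
            pop = pop[np.argsort(light)]         # the ranking (sorting) step
            for i in range(n):
                for j in range(i):               # firefly j is brighter than firefly i
                    r2 = float(np.sum((pop[i] - pop[j])**2))
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    pop[i] += beta*(pop[j] - pop[i]) + alpha*rng.normal(size=dim)
            alpha *= 0.98                        # gradually damp the random walk
        best = min(pop, key=f)
        return best, f(best)

    sphere = lambda x: float(np.sum(x**2))       # standard unconstrained benchmark
    print(firefly(sphere)[1])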
An approach for aerodynamic optimization of transonic fan blades
NASA Astrophysics Data System (ADS)
Khelghatibana, Maryam
Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single- and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed a +0.28-point improvement in isentropic efficiency at the design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed. First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin. An investigation of the Pareto-optimal solutions of this optimization shows that the stall margin increases with improving near-stall efficiency. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents +0.41-, +0.56- and +0.9-point improvements in near-peak efficiency, near-stall efficiency and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choke margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage and shock/tip-leakage interactions in both single- and multi-point optimizations.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
Direct adaptive performance optimization of subsonic transports: A periodic perturbation technique
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn
1995-01-01
Aircraft performance can be optimized at the flight condition by using available redundancy among actuators. Effective use of this potential allows improved performance beyond limits imposed by design compromises. Optimization based on nominal models does not result in the best performance of the actual aircraft at the actual flight condition. An adaptive algorithm for optimizing performance parameters, such as speed or fuel flow, in flight based exclusively on flight data is proposed. The algorithm is inherently insensitive to model inaccuracies and measurement noise and biases and can optimize several decision variables at the same time. An adaptive constraint controller integrated into the algorithm regulates the optimization constraints, such as altitude or speed, without requiring any prior knowledge of the autopilot design. The algorithm has a modular structure which allows easy incorporation (or removal) of optimization constraints or decision variables to the optimization problem. An important part of the contribution is the development of analytical tools enabling convergence analysis of the algorithm and the establishment of simple design rules. The fuel-flow minimization and velocity maximization modes of the algorithm are demonstrated on the NASA Dryden B-720 nonlinear flight simulator for the single- and multi-effector optimization cases.
Design optimization of hydraulic turbine draft tube based on CFD and DOE method
NASA Astrophysics Data System (ADS)
Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin
2018-03-01
In order to improve the performance of the hydraulic turbine draft tube in its design process, an optimization of the draft tube is performed on a multi-disciplinary collaborative design optimization platform by combining computational fluid dynamics (CFD) and design of experiments (DOE) in this paper. The geometrical design variables are the median section of the draft tube and the cross section of its exit diffuser, and the objective is to maximize the pressure recovery factor (Cp). Sample matrices required for the shape optimization of the draft tube are generated by the optimal Latin hypercube (OLH) method of the DOE technique and their performances are evaluated through CFD numerical simulation. Subsequently, the main effect analysis and the sensitivity analysis of the geometrical parameters of the draft tube are accomplished. Then, the optimal values of the geometrical design variables are determined using the response surface method. The optimization result of the draft tube shows a marked performance improvement over the original design.
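A brief sketch of the DOE-plus-surrogate workflow described above: draw a Latin hypercube sample over two section parameters, evaluate a hypothetical stand-in for the CFD pressure recovery factor Cp, and fit a quadratic response surface (scipy's LatinHypercube is used here in place of the paper's optimal LH generator):

    import numpy as np
    from scipy.stats import qmc

    def cp_model(X):   # hypothetical stand-in for the CFD evaluation of Cp
        return 0.8 - (X[:, 0] - 0.55)**2 - 0.5*(X[:, 1] - 0.4)**2

    X = qmc.LatinHypercube(d=2, seed=3).random(n=20)   # DOE sample matrix
    y = cp_model(X)

    # fit a full quadratic response surface by least squares
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0]*X[:, 1], X[:, 0]**2, X[:, 1]**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("response-surface coefficients:", np.round(coef, 3))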
NASA Astrophysics Data System (ADS)
Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon
2016-03-01
In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing-induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling, especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly reduced metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell-based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay-based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
NASA Astrophysics Data System (ADS)
Haagmans, G. G.; Verhagen, S.; Voûte, R. L.; Verbree, E.
2017-09-01
Since GPS tends to fail for indoor positioning purposes, alternative methods like indoor positioning systems (IPS) based on Bluetooth low energy (BLE) are developing rapidly. Generally, IPS are deployed in environments covered with obstacles such as furniture, walls, people and electronics influencing the signal propagation. The major factor influencing the system performance, and the key to acquiring optimal positioning results, is the geometry of the beacons. The geometry of the beacons is limited by the available infrastructure that can be deployed (number of beacons, base stations and tags), which leads to the following challenge: given a limited number of beacons, where should they be placed in a specified indoor environment, such that the geometry contributes to optimal positioning results? This paper aims to propose a statistical model that is able to select the optimal configuration that satisfies the user requirements in terms of precision. The model requires the definition of a chosen 3D space (in our case 7 × 10 × 6 meters), the number of beacons, the possible user tag locations and a performance threshold (e.g. required precision). For any given set of beacon and receiver locations, the precision and the internal and external reliability can be determined beforehand. As validation, the modeled precision has been compared with observed precision results. The measurements have been performed with an IPS of BlooLoc at a chosen set of user tag locations for a given geometric configuration. Eventually, the model is able to select the optimal geometric configuration out of millions of possible configurations based on a performance threshold (e.g. required precision).
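A condensed sketch of the geometry-quality evaluation such a model performs: for a candidate beacon layout and tag position, build the unit line-of-sight matrix and compute a dilution-of-precision-style metric from the inverse of G^T G; the coordinates and ranging-noise level are illustrative:

    import numpy as np

    def precision_metric(beacons, tag, sigma_range=0.5):
        diff = beacons - tag
        G = diff / np.linalg.norm(diff, axis=1, keepdims=True)  # unit line-of-sight rows
        Q = np.linalg.inv(G.T @ G)                              # cofactor matrix
        return sigma_range * np.sqrt(np.trace(Q))               # position precision (m)

    beacons = np.array([[0.0, 0, 3], [7, 0, 3], [0, 10, 3], [7, 10, 6]])
    print(precision_metric(beacons, tag=np.array([3.5, 5.0, 1.2])))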
Using string invariants for prediction searching for optimal parameters
NASA Astrophysics Data System (ADS)
Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard
2016-02-01
We have developed a novel prediction method based on string invariants. The method does not require learning, but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real-world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single-step prediction but the method's performance for multiple-step prediction needs to be improved. The method works well for a wide range of parameters.
Initial Ares I Bending Filter Design
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark
2007-01-01
The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency and hence phase stabilizing the first Ares-I flex mode. To minimize rigid body performance impacts, a priority was placed via constraints in the optimization algorithm to minimize bandwidth decrease with the addition of the bending filters. The bending filters provided here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time domain simulation.
Pulsed Inductive Plasma Acceleration: Performance Optimization Criteria
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.
2014-01-01
Optimization criteria for pulsed inductive plasma acceleration are developed using an acceleration model consisting of a set of coupled circuit equations describing the time-varying current in the thruster and a one-dimensional momentum equation. The model is nondimensionalized, resulting in the identification of several scaling parameters that are varied to optimize the performance of the thruster. The analysis reveals the benefits of underdamped current waveforms and leads to a performance optimization criterion that requires the matching of the natural period of the discharge and the acceleration timescale imposed by the inertia of the working gas. In addition, the performance increases when a greater fraction of the propellant is initially located nearer to the inductive acceleration coil. While the dimensionless model uses a constant temperature formulation in calculating performance, the scaling parameters that yield the optimum performance are shown to be relatively invariant if a self-consistent description of energy in the plasma is instead used.
Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares
NASA Technical Reports Server (NTRS)
Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.
2012-01-01
A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
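The "sequence of increasingly larger ranges" can be read as a one-dimensional search for the largest certifiable uncertainty radius. A minimal sketch of that outer loop follows; the SOS feasibility test is abstracted into a placeholder predicate with a hypothetical certification boundary, since a real implementation would call an SOS solver at each step.

```python
def certifiable(delta):
    # Placeholder for the SOS analysis: returns True if the closed loop
    # provably meets all requirements for uncertainty radius delta.
    return delta <= 0.42  # hypothetical certification boundary

def max_uncertainty_range(hi=10.0, tol=1e-4):
    """Bisect for the largest radius that still certifies."""
    lo = 0.0
    if not certifiable(lo):
        return None          # requirements fail even with no uncertainty
    while certifiable(hi):   # grow until certification first fails
        lo, hi = hi, 2 * hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if certifiable(mid) else (lo, mid)
    return lo                # robustness metric for the closed loop

print(max_uncertainty_range())
```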
A Doppler centroid estimation algorithm for SAR systems optimized for the quasi-homogeneous source
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1989-01-01
Radar signal processing applications frequently require an estimate of the Doppler centroid of a received signal. The Doppler centroid estimate is required for synthetic aperture radar (SAR) processing. It is also required for some applications involving target motion estimation and antenna pointing direction estimation. In some cases, the Doppler centroid can be accurately estimated based on available information regarding the terrain topography, the relative motion between the sensor and the terrain, and the antenna pointing direction. Often, the accuracy of the Doppler centroid estimate can be improved by analyzing the characteristics of the received SAR signal. This kind of signal processing is also referred to as clutterlock processing. A Doppler centroid estimation (DCE) algorithm is described which contains a linear estimator optimized for the type of terrain surface that can be modeled by a quasi-homogeneous source (QHS). Information on the following topics is presented: (1) an introduction to the theory of Doppler centroid estimation; (2) analysis of the performance characteristics of previously reported DCE algorithms; (3) comparison of these analysis results with experimental results; (4) a description and performance analysis of a Doppler centroid estimator which is optimized for a QHS; and (5) comparison of the performance of the optimal QHS Doppler centroid estimator with that of previously reported methods.
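For orientation, a common clutterlock baseline estimates the centroid as the first moment of the azimuth power spectrum. The sketch below implements that generic estimator on synthetic band-limited clutter; it is not the QHS-optimal linear estimator described in the paper.

```python
import numpy as np

def doppler_centroid(signal, prf):
    """First-moment (power-weighted mean frequency) centroid estimate in Hz."""
    spec = np.abs(np.fft.fft(signal)) ** 2
    freqs = np.fft.fftfreq(len(signal), d=1.0 / prf)
    return float(np.sum(freqs * spec) / np.sum(spec))

# Synthetic test: band-limited clutter shifted to a +200 Hz centroid.
prf, n = 1700.0, 4096
rng = np.random.default_rng(1)
noise = rng.normal(size=n) + 1j * rng.normal(size=n)
clutter = np.convolve(noise, np.ones(16) / 16, mode="same")  # band-limit
t = np.arange(n) / prf
echo = clutter * np.exp(2j * np.pi * 200.0 * t)
print(f"estimated centroid: {doppler_centroid(echo, prf):.0f} Hz")
```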
DfM requirements and ROI analysis for system-on-chip
NASA Astrophysics Data System (ADS)
Balasinski, Artur
2005-11-01
DfM (Design-for-Manufacturability) has become a staple requirement beyond the 100 nm technology node for efficient generation of mask data, cost reduction, and optimal circuit performance. Layout patterns have to comply with many requirements pertaining to database structure and complexity, suitability for image enhancement by optical proximity correction, and mask data pattern density and distribution over the image field. These requirements are of particular complexity for Systems-on-Chip (SoC). A number of macro-, meso-, and microscopic effects such as reticle macroloading, planarization dishing, and pattern bridging or breaking would compromise fab yield, device performance, or both. In order to determine the optimal set of DfM rules applicable to particular designs, Return-on-Investment (ROI) and Failure Mode and Effect Analysis (FMEA) are proposed.
Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.
1998-01-01
A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin
2017-06-14
The performance of a passive radar network can be greatly improved by an optimal network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and the selection of appropriate illuminators. The present study investigates the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key to the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling convex optimization with a greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.
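A rough sketch of the outer structure described above, under simplified assumptions: candidate receiver sites either cover a demand cell at a given threshold or not, a greedy heuristic solves each set covering instance (standing in for the paper's convex/greedy hybrid), and bisection finds the smallest threshold coverable with p receivers. The grid, candidate sites, and coverage rule are all hypothetical.

```python
cells = [(x, y) for x in range(5) for y in range(5)]     # demand grid
cand = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 2)]          # candidate sites

def covered(site, cell, radius):
    return (site[0] - cell[0]) ** 2 + (site[1] - cell[1]) ** 2 <= radius ** 2

def greedy_cover(radius):
    """Greedy set covering: repeatedly pick the most useful site."""
    uncovered, chosen = set(cells), []
    while uncovered:
        best = max(cand, key=lambda s: sum(covered(s, c, radius) for c in uncovered))
        gain = {c for c in uncovered if covered(best, c, radius)}
        if not gain:
            return None                       # instance infeasible
        chosen.append(best)
        uncovered -= gain
    return chosen

def min_radius(p, lo=0.0, hi=10.0, tol=1e-3):
    """Bisect the smallest coverage radius achievable with <= p receivers."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        sol = greedy_cover(mid)
        hi, lo = (mid, lo) if sol is not None and len(sol) <= p else (hi, mid)
    return hi

print(min_radius(p=3))
```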
Current Perspectives on Profiling and Enhancing Wheelchair Court Sport Performance.
Paulson, Thomas; Goosey-Tolfrey, Victoria
2017-03-01
Despite the growing interest in Paralympic sport, the evidence base for supporting elite wheelchair sport performance remains in its infancy when compared with able-bodied (AB) sport. Consequently, current practice is often based on theory adapted from AB guidelines, with a heavy reliance on anecdotal evidence and practitioner experience. Many principles in training prescription and performance monitoring with wheelchair athletes are directly transferable from AB practice, including the periodization and tapering of athlete loads around competition, yet considerations for the physiological consequences of an athlete's impairment and the interface between athlete and equipment are vital when targeting interventions to optimize in-competition performance. Researchers and practitioners are faced with the challenge of identifying and implementing reliable protocols that detect small but meaningful changes in impairment-specific physical capacities and on-court performance. Technologies to profile both linear and rotational on-court performance are an essential component of sport-science support to understand sport-specific movement profiles and prescribe training intensities. An individualized approach to the prescription of athlete training and optimization of the "wheelchair-user interface" is also required, accounting for an athlete's anthropometrics, sports classification, and positional role on court. In addition to enhancing physical capacities, interventions must focus on the integration of the athlete and his or her equipment, as well as techniques for limiting environmental influence on performance. Taken together, the optimization of wheelchair sport performance requires a multidisciplinary approach based on the individual requirements of each athlete.
Optimization Model for Web Based Multimodal Interactive Simulations.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-07-15
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization, and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
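The optimization phase can be pictured as choosing discrete rendering and simulation settings that maximize quality subject to the capability budget measured during identification. Rather than reproduce the paper's mixed integer program, here is a small exhaustive-search stand-in with invented cost and quality numbers.

```python
import itertools

# Hypothetical per-setting costs (ms of frame budget) and quality scores.
texture = {256: (1.0, 1), 512: (2.5, 2), 1024: (6.0, 3)}   # size: (cost, quality)
canvas  = {480: (2.0, 1), 720: (4.0, 2), 1080: (8.0, 3)}
sim_dom = {"small": (3.0, 1), "medium": (6.0, 2), "large": (11.0, 3)}

budget_ms = 16.0   # assumed budget from the identification phase

best = None
for t, c, s in itertools.product(texture, canvas, sim_dom):
    cost = texture[t][0] + canvas[c][0] + sim_dom[s][0]
    quality = texture[t][1] + canvas[c][1] + sim_dom[s][1]
    if cost <= budget_ms and (best is None or quality > best[0]):
        best = (quality, t, c, s)

print(best)   # settings handed to the update phase
```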
Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection
Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi
2011-01-01
The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237
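A toy version of the allocation problem: distribute a fixed total of samples across sensors so that a misclassification-probability model is minimized. The error model below is a hypothetical stand-in for the correlated-Gaussian computation, and the exhaustive enumeration is exactly the baseline that the paper's continuous-valued relaxation is designed to avoid.

```python
import itertools
import math

def perror(alloc, discrim=(0.8, 1.5, 0.6)):
    # Hypothetical error model: error decays with discriminability-weighted
    # sample counts (a stand-in for the correlated-Gaussian computation).
    return math.exp(-sum(d * n for d, n in zip(discrim, alloc)) / 10.0)

def best_allocation(total=20, sensors=3):
    best = None
    for alloc in itertools.product(range(total + 1), repeat=sensors):
        if sum(alloc) != total:
            continue
        p = perror(alloc)
        if best is None or p < best[0]:
            best = (p, alloc)
    return best

print(best_allocation())  # favors the most discriminative sensor
```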
NASA Astrophysics Data System (ADS)
Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar
2014-03-01
The present work studies and identifies the different variables that affect the output parameters involved in a single-cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for the combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE), and the emission parameters (CO, NOx, unburnt HC, and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of the combustion (BSFC, Pmax), performance (BTE), and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC, and smoke, a multiobjective optimization problem is formulated. Nondominated sorting genetic algorithm-II is used to predict the Pareto optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance, and emission parameters to check the adequacy of the proposed model. The Pareto optimal sets of solutions can be used as guidelines for end users to select the optimal combination of engine output and emission parameters depending upon their own requirements.
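For reporting purposes, the non-dominated sorting at the heart of NSGA-II reduces to a Pareto filter over the evaluated designs. A minimal sketch with invented objective vectors follows; BTE is negated so that every objective is minimized.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical designs as (BSFC, -BTE, NOx); BTE negated so lower is better.
designs = [(245, -31.0, 420), (252, -33.5, 455), (240, -29.5, 400),
           (260, -34.0, 500), (250, -30.0, 440)]
print(pareto_front(designs))   # (250, -30.0, 440) is dominated and dropped
```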
Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method
NASA Astrophysics Data System (ADS)
Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen
2008-03-01
The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
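The first strategy, predicting expensive time-history responses with a trained approximator, can be sketched with SciPy's RBF interpolator standing in for the RBF network; the "exact analysis" here is an arbitrary stand-in function, not a structural THA.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

def exact_response(x):
    # Stand-in for a time history analysis returning a peak response.
    return np.sin(3 * x[:, 0]) + 0.5 * np.cos(2 * x[:, 1])

# Train the surrogate on a modest number of exact analyses ...
X_train = rng.uniform(-1, 1, size=(60, 2))
y_train = exact_response(X_train)
surrogate = RBFInterpolator(X_train, y_train)

# ... then let the optimizer (BPSO in the paper) query it cheaply.
X_query = rng.uniform(-1, 1, size=(5, 2))
print(np.c_[surrogate(X_query), exact_response(X_query)])
```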
Tuning HDF5 for Lustre File Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howison, Mark; Koziol, Quincey; Knaak, David
2010-09-24
HDF5 is a cross-platform parallel I/O library that is used by a wide variety of HPC applications for the flexibility of its hierarchical object-database representation of scientific data. We describe our recent work to optimize the performance of the HDF5 and MPI-IO libraries for the Lustre parallel file system. We selected three different HPC applications to represent the diverse range of I/O requirements, and measured their performance on three different systems to demonstrate the robustness of our optimizations across different file system configurations and to validate our optimization strategy. We demonstrate that the combined optimizations improve HDF5 parallel I/O performance by up to 33 times in some cases, running close to the achievable peak performance of the underlying file system, and demonstrate scalable performance up to 40,960-way concurrency.
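The flavor of such tuning shows up even in a minimal parallel-HDF5 write. The sketch below assumes h5py built with MPI support plus mpi4py, uses collective transfers, and chunks the dataset so each rank's write falls on a chunk boundary; the chunk size is illustrative, not a Lustre-tuned value.

```python
# Run under MPI, e.g.: mpiexec -n 4 python write_h5.py
from mpi4py import MPI
import h5py
import numpy as np

comm = MPI.COMM_WORLD
n_local = 1 << 20                       # elements written by each rank

with h5py.File("out.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("data", (comm.size * n_local,), dtype="f8",
                            chunks=(n_local,))   # illustrative chunking
    start = comm.rank * n_local
    with dset.collective:               # collective MPI-IO transfers
        dset[start:start + n_local] = np.full(n_local, comm.rank, "f8")
```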
Use of mathematical decomposition to optimize investments in gas production and distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dougherty, E.L.; Lombardino, E.; Hutchinson, P.
1986-01-01
This paper presents an analytical approach based upon the decomposition method of mathematical programming for determining the optimal investment sequence in each year of a planning horizon for a group of reservoirs that produce gas and gas liquids through a trunk-line network and a gas processing plant. The paper describes the development of the simulation and investment planning system (SIPS) to perform the required calculations. Net present value (NPV) is maximized with the requirement that the incremental present value ratio (PWPI) of any investment in any reservoir be greater than a specified minimum value. A unique feature is a gas reservoir simulation model that aids SIPS in evaluating field development investments. The optimal solution supplies specified dry gas offtake requirements through time until the remaining reserves are insufficient to meet requirements economically. The sales value of recovered liquids contributes significantly to NPV, while the required spare gas-producing capacity reduces NPV. SIPS was used successfully for 4 years to generate annual investment plans and operating budgets, and to perform many special studies for a producing complex containing over 50 reservoirs. This experience is reviewed. For this large problem, SIPS converges to the optimal solution in 10 to 20 iterations; the primary factor determining this number is the quality of the starting guess. Although SIPS can generate a starting guess, beginning with a previous optimal solution ordinarily results in faster convergence. Computing time increases in proportion to the number of reservoirs because more than 90% of computing time is spent solving the reservoir subproblems.
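The economic screen described (maximize NPV while requiring each incremental investment's present-value ratio to clear a hurdle) can be illustrated in a few lines; the cash flows, discount rate, and hurdle below are all invented, and the PWPI shown is a simplified reading of the abstract.

```python
def npv(cashflows, rate=0.10):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def pwpi(investment, cashflows, rate=0.10):
    # Present-value ratio: discounted returns per dollar invested.
    return npv(cashflows, rate) / investment

# Hypothetical candidate investments: (capex, yearly net cash flows).
candidates = {"infill well": (4.0, [0, 1.8, 1.6, 1.4, 1.0]),
              "compressor":  (6.0, [0, 1.2, 1.2, 1.2, 1.2]),
              "new pad":     (9.0, [0, 2.0, 2.0, 1.8, 1.5])}

hurdle = 0.6   # assumed minimum incremental PWPI
accepted = {k: round(npv(cf) - c, 2) for k, (c, cf) in candidates.items()
            if pwpi(c, cf) >= hurdle}
print(accepted)   # projects passing the ratio test, with their net NPV
```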
Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros
NASA Technical Reports Server (NTRS)
Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.
1973-01-01
Many future NASA programs require very accurate pointing stability; these pointing requirements are well beyond anything attempted to date. This paper suggests a control system capable of meeting these requirements. An optimal control law for the suggested system is specified. However, since no direct method of solution is known for this complicated system, a computational technique using successive approximations is used to develop the required solution. The calculus of variations is applied to estimate changes in the performance index as well as in the inequality constraints on state variables and terminal conditions. An algorithm is thus obtained by the steepest descent method and/or the conjugate gradient method. Numerical examples are given to show the optimal controls.
Performance Analysis and Design Synthesis (PADS) computer program. Volume 3: User manual
NASA Technical Reports Server (NTRS)
1972-01-01
The two-fold purpose of the Performance Analysis and Design Synthesis (PADS) computer program is discussed. The program can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general purpose branched trajectory optimization program. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent. The second module uses the method of quasi-linearization, which requires a starting solution from the first trajectory module.
Doing our best: optimization and the management of risk.
Ben-Haim, Yakov
2012-08-01
Tools and concepts of optimization are widespread in decision-making, design, and planning. There is a moral imperative to "do our best." Optimization underlies theories in physics and biology, and economic theories often presume that economic agents are optimizers. We argue that in decisions under uncertainty, what should be optimized is robustness rather than performance. We discuss the equity premium puzzle from financial economics, and explain that the puzzle can be resolved by using the strategy of satisficing rather than optimizing. We discuss design of critical technological infrastructure, showing that satisficing of performance requirements--rather than optimizing them--is a preferable design concept. We explore the need for disaster recovery capability and its methodological dilemma. The disparate domains--economics and engineering--illuminate different aspects of the challenge of uncertainty and of the significance of robust-satisficing. © 2012 Society for Risk Analysis.
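The contrast between optimizing and robust-satisficing can be made concrete with an info-gap style toy: for each design, compute how much model error it can absorb while still meeting the requirement, then prefer the design with the larger robustness. The linear performance model and all numbers are invented for illustration.

```python
def robustness(nominal, sensitivity, requirement):
    """Largest uncertainty h such that performance stays acceptable.

    Performance model (assumed): perf(h) = nominal - sensitivity * h,
    acceptable when perf >= requirement.
    """
    if nominal < requirement:
        return 0.0
    return (nominal - requirement) / sensitivity

# design: (nominal performance, sensitivity to model error)
designs = {"optimizer's pick": (100.0, 40.0),   # best nominal, fragile
           "robust design":    (90.0, 10.0)}    # worse nominal, tolerant

requirement = 80.0
for name, (nom, sens) in designs.items():
    print(name, robustness(nom, sens, requirement))
# The nominally best design survives h = 0.5 of uncertainty; the
# satisficing design survives h = 1.0 and is preferred.
```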
NASA Astrophysics Data System (ADS)
Arroyo, Orlando; Gutiérrez, Sergio
2017-07-01
Several seismic optimization methods have been proposed to improve the performance of reinforced concrete framed (RCF) buildings; however, they have not been widely adopted among practising engineers because they require complex nonlinear models and are computationally expensive. This article presents a procedure to improve the seismic performance of RCF buildings based on eigenfrequency optimization, which is effective, simple to implement and efficient. The method is used to optimize a 10-storey regular building, and its effectiveness is demonstrated by nonlinear time history analyses, which show important reductions in storey drifts and lateral displacements compared to a non-optimized building. A second example for an irregular six-storey building demonstrates that the method provides benefits to a wide range of RCF structures and supports the applicability of the proposed method.
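Part of what makes the approach cheap is that each candidate design needs only a generalized eigensolve rather than a nonlinear THA. A minimal sketch for a shear-building idealization follows; the stiffness and mass values are arbitrary.

```python
import numpy as np
from scipy.linalg import eigh

def fundamental_frequency(k, m):
    """First natural frequency (Hz) of a shear building.

    k, m: per-storey stiffness (N/m) and mass (kg) arrays.
    Builds the standard tridiagonal stiffness matrix and solves
    the generalized eigenproblem K x = w^2 M x.
    """
    n = len(k)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k[i]
        if i + 1 < n:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    M = np.diag(m)
    w2 = eigh(K, M, eigvals_only=True)
    return np.sqrt(w2[0]) / (2 * np.pi)

k = np.full(10, 2.0e8)      # hypothetical storey stiffnesses
m = np.full(10, 3.0e5)      # hypothetical storey masses
print(f"f1 = {fundamental_frequency(k, m):.2f} Hz")
# An optimizer redistributes stiffness (at fixed material volume) to
# shift this frequency, standing in for the eigenfrequency objective.
```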
Calibration Modeling Methodology to Optimize Performance for Low Range Applications
NASA Technical Reports Server (NTRS)
McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.
2010-01-01
Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty expressed as a percent of full scale. However, in some applications we seek enhanced performance at the low range, in which case expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to desire to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad application in all types of transducer applications where low range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
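The percent-of-reading idea amounts to weighting calibration residuals by the inverse of the reading, so that relative rather than absolute error is minimized. A small weighted-least-squares sketch on synthetic transducer data follows; the model form and weights are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic calibration data: response with mild nonlinearity and noise.
load = np.linspace(1.0, 100.0, 25)                 # avoid zero reading
resp = 0.02 * load + 1.5e-5 * load**2 + rng.normal(0, 5e-3, load.size)

X = np.vander(load, 3)                             # quadratic model

# Ordinary LS minimizes absolute residuals (percent-of-full-scale view).
beta_ols = np.linalg.lstsq(X, resp, rcond=None)[0]

# Weighting rows by 1/load approximates minimizing relative
# (percent-of-reading) residuals, favoring low-range accuracy.
w = 1.0 / load
beta_wls = np.linalg.lstsq(X * w[:, None], resp * w, rcond=None)[0]

for name, b in [("OLS", beta_ols), ("WLS", beta_wls)]:
    rel = (X @ b - resp) / resp
    print(name, f"max |error| {np.abs(rel).max():.3%} of reading")
```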
Closed Loop System Identification with Genetic Algorithms
NASA Technical Reports Server (NTRS)
Whorton, Mark S.
2004-01-01
High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. Closed loop system identification is often required to obtain a multivariable open loop plant model based on closed-loop response data. In order to provide an accurate initial plant model to guarantee convergence for standard local optimization methods, this paper presents a global parameter optimization method using genetic algorithms. A minimal representation of the state space dynamics is employed to mitigate the non-uniqueness and over-parameterization of general state space realizations. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance.
Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III
1996-01-01
Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.
Thermal/Structural Tailoring of Engine Blades (T/STAEBL) User's manual
NASA Technical Reports Server (NTRS)
Brown, K. W.
1994-01-01
The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that is able to perform numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, its optimization procedure, and provides an overview of the input required to run the program, as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.
Model-Based Thermal System Design Optimization for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-01-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy, and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases for the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasi-global optimal solutions were found, and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial capability when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system are presented in detail.
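At its core, the automated correlation step is a nonlinear least-squares fit of model parameters to test temperatures. A compact stand-in using scipy.optimize.least_squares is shown below; the two-node thermal model and the "test data" are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def model_temps(params, q=50.0, t_sink=290.0):
    """Steady-state temperatures of a toy two-node conduction model.

    params = (g1, g2): conductances node1-node2 and node2-sink (W/K).
    Heat q enters node 1; solve the linear balance for (T1, T2).
    """
    g1, g2 = params
    A = np.array([[g1, -g1], [-g1, g1 + g2]])
    b = np.array([q, g2 * t_sink])
    return np.linalg.solve(A, b)

true = (2.0, 5.0)
test_data = model_temps(true) + np.array([0.3, -0.2])   # "measured" temps

fit = least_squares(lambda p: model_temps(p) - test_data,
                    x0=[1.0, 1.0], bounds=([0.1, 0.1], [20.0, 20.0]))
print(fit.x)        # conductances tuned to reproduce the test data
```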
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.
2015-05-01
Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. Getting maximum performance out of the Xeon Phi, however, requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.
Low cost Ku-band earth terminals for voice/data/facsimile
NASA Technical Reports Server (NTRS)
Kelley, R. L.
1977-01-01
A Ku-band satellite earth terminal capable of providing two-way voice/facsimile teleconferencing, 128 Kbps data, telephone, and high-speed imagery services is proposed. Optimized terminal cost and configuration are presented as a function of FDMA and TDMA approaches to multiple access. The entire terminal, from the antenna to microphones, speakers, and facsimile equipment, is considered. Component cost versus performance has been projected as a function of the size of the procurement and of predicted hardware innovations and production techniques through 1985. The lowest-cost combinations of components have been determined with a computer optimization algorithm. The system requirements, including terminal EIRP and G/T, satellite size, power per spacecraft transponder, satellite antenna characteristics, and link propagation outage, were selected using a computerized system cost/performance optimization algorithm. System cost and terminal cost and performance requirements are presented as a function of the size of a nationwide U.S. network. Service costs are compared with typical conference travel costs to show the viability of the proposed terminal.
A Neuroscience Approach to Optimizing Brain Resources for Human Performance in Extreme Environments
Paulus, Martin P.; Potterat, Eric G.; Taylor, Marcus K.; Van Orden, Karl F.; Bauman, James; Momen, Nausheen; Padilla, Genieleah A.; Swain, Judith L.
2009-01-01
Extreme environments requiring optimal cognitive and behavioral performance occur in a wide variety of situations ranging from complex combat operations to elite athletic competitions. Although a large literature characterizes psychological and other aspects of individual differences in performances in extreme environments, virtually nothing is known about the underlying neural basis for these differences. This review summarizes the cognitive, emotional, and behavioral consequences of exposure to extreme environments, discusses predictors of performance, and builds a case for the use of neuroscience approaches to quantify and understand optimal cognitive and behavioral performance. Extreme environments are defined as an external context that exposes individuals to demanding psychological and/or physical conditions, and which may have profound effects on cognitive and behavioral performance. Examples of these types of environments include combat situations, Olympic-level competition, and expeditions in extreme cold, at high altitudes, or in space. Optimal performance is defined as the degree to which individuals achieve a desired outcome when completing goal-oriented tasks. It is hypothesized that individual variability with respect to optimal performance in extreme environments depends on a well “contextualized” internal body state that is associated with an appropriate potential to act. This hypothesis can be translated into an experimental approach that may be useful for quantifying the degree to which individuals are particularly suited to performing optimally in demanding environments. PMID:19447132
The art and science of missile defense sensor design
NASA Astrophysics Data System (ADS)
McComas, Brian K.
2014-06-01
A Missile Defense Sensor is a complex optical system which sits idle for long periods of time, must work with little or no on-board calibration, and is used to find and discriminate targets and guide the kinetic warhead to the target within minutes of launch. A short overview of the missile defense problem is given here, along with the top-level performance drivers, such as Noise Equivalent Irradiance (NEI), acquisition range, and dynamic range. These top-level parameters influence the choice of optical system, mechanical system, focal plane array (FPA), Read Out Integrated Circuit (ROIC), and cryogenic system. This paper discusses not only the physics behind the performance of the sensor but also the "art" of optimizing its performance given the top-level performance parameters. Balancing the sensor sub-systems is key to the sensor's performance in these highly stressful missions. Top-level performance requirements impact the choice of lower-level hardware and requirements; this flow down of requirements to the lower-level hardware will be discussed. The flow down directly impacts the FPA, where careful selection of the detector is required, and it also influences the ROIC and cooling requirements. The key physics behind the detector and cryogenic system interactions will be discussed, along with the balancing of subsystem performance. Finally, the overall system balance and optimization will be discussed in the context of missile defense sensors and the expected performance of the overall kinetic warhead.
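One of these balancing exercises can be made concrete: for a point target of given radiant intensity, the inverse-square law together with an NEI and a required SNR fixes the acquisition range. The numbers below are invented, and a real design adds atmospheric transmission and background terms omitted here.

```python
import math

def acquisition_range(intensity_w_sr, nei_w_cm2, snr_required):
    """Range (km) at which target irradiance falls to snr * NEI.

    Irradiance from a point source: E = I / R^2. Detection requires
    E >= snr_required * NEI, so R_acq = sqrt(I / (snr * NEI)).
    """
    r_cm = math.sqrt(intensity_w_sr / (snr_required * nei_w_cm2))
    return r_cm / 1.0e5                     # cm -> km

# Hypothetical values: 100 W/sr target, NEI of 1e-13 W/cm^2, SNR of 6.
print(f"{acquisition_range(100.0, 1e-13, 6.0):.0f} km")
```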
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Wang, B. P.; Yoo, Y.; Clark, B.
1973-01-01
A description and applications of a computer capability for determining the ultimate optimal behavior of a dynamically loaded structural-mechanical system are presented. This capability provides the characteristics of the theoretically best, or limiting, design concept according to response criteria dictated by design requirements. Equations of motion of the system, in first- or second-order form, include incompletely specified elements whose characteristics are determined in the optimization of one or more performance indices subject to the response criteria in the form of constraints. The system is subject to deterministic transient inputs, and the computer capability is designed to operate with a large off-the-shelf linear programming software package which performs the desired optimization. The report contains user-oriented program documentation in engineering, problem-oriented form. Applications cover a wide variety of dynamics problems, including those associated with such diverse configurations as a missile-silo system, impacting freight cars, and an aircraft ride control system.
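The classic limiting-performance formulation behind such a capability is linear once time is discretized, which is why a linear programming package suffices. The sketch below sets up a tiny shock-isolator version (unit mass, minimize peak control force subject to a rattle-space limit) with scipy.optimize.linprog; the shock profile and limits are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Discretized limiting-performance problem for a shock isolator:
# unit mass, relative coordinate d'' = u - a(t); find the control u(t)
# minimizing peak |u| subject to a rattle-space limit |d| <= D.
n, dt, D = 60, 0.01, 0.05
t = np.arange(n) * dt
a = np.where(t < 0.1, 200.0, 0.0)            # hypothetical base shock

# d_k = sum_i G[k, i] * (u_i - a_i): the discrete double integral.
G = np.tril((t[:, None] - t[None, :]) * dt, k=-1)

# Variables [u_0..u_{n-1}, z]; minimize z with |u_i| <= z, |d_k| <= D.
c = np.r_[np.zeros(n), 1.0]
I, zcol = np.eye(n), -np.ones((n, 1))
A_ub = np.block([[I, zcol], [-I, zcol],
                 [np.c_[G, np.zeros(n)]], [np.c_[-G, np.zeros(n)]]])
b_ub = np.r_[np.zeros(2 * n), D + G @ a, D - G @ a]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])
print(f"limiting peak control force: {res.x[-1]:.1f} N")
```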
Capsule Performance Optimization in the National Ignition Campaign
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landen, O L; MacGowan, B J; Haan, S W
2009-10-13
A capsule performance optimization campaign will be conducted at the National Ignition Facility to substantially increase the probability of ignition. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
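In its simplest form, such a roll-up is a root-sum-square combination of independent error terms checked against the budget; a sketch with invented contributions:

```python
import math

# Hypothetical 1-sigma error contributions to a tuned laser parameter (%).
contributions = {"measurement": 1.2, "calibration": 0.8,
                 "cross-coupling": 0.5, "surrogacy": 1.0, "scale-up": 0.9}

rss = math.sqrt(sum(v**2 for v in contributions.values()))
budget = 2.5
print(f"RSS = {rss:.2f}% against a {budget}% budget:",
      "within budget" if rss <= budget else "over budget")
```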
Thermofluid Analysis of Magnetocaloric Refrigeration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdelaziz, Omar; Gluesenkamp, Kyle R; Vineyard, Edward Allan
While there have been extensive studies on thermofluid characteristics of different magnetocaloric refrigeration systems, a conclusive optimization study using non-dimensional parameters that can be applied to a generic system has not been reported yet. In this study, a numerical model has been developed for optimization of an active magnetic refrigerator (AMR). This model is computationally efficient and robust, making it appropriate for running the thousands of simulations required for parametric study and optimization. The governing equations have been non-dimensionalized and numerically solved using the finite difference method. A parametric study on a wide range of non-dimensional numbers has been performed. While the goal of AMR systems is to improve competing performance parameters, including COP, cooling capacity, and temperature span, a new parameter called AMR performance index-1 has been introduced in order to perform multi-objective optimization and exploit all of these parameters simultaneously. The multi-objective optimization is carried out for a wide range of the non-dimensional parameters. The results of this study will provide general guidelines for designing high-performance AMR systems.
A Comparison of Metallic, Composite and Nanocomposite Optimal Transonic Transport Wings
NASA Technical Reports Server (NTRS)
Kennedy, Graeme J.; Kenway, Gaetan K. W.; Martins, Joaquim R. R.
2014-01-01
Current and future composite material technologies have the potential to greatly improve the performance of large transport aircraft. However, the coupling between aerodynamics and structures makes it challenging to design optimal flexible wings, and the transonic flight regime requires high-fidelity computational models. We address these challenges by solving a series of high-fidelity aerostructural optimization problems that explore the design space for the wing of a large transport aircraft. We consider three different materials: aluminum, carbon-fiber reinforced composites, and a hypothetical composite based on carbon nanotubes. The design variables consist of aerodynamic shape (including span), structural sizing, and ply angle fractions in the case of composites. Pareto fronts with respect to structural weight and fuel burn are generated. The wing performance in each case is optimized subject to stress and buckling constraints. We found that composite wings consistently resulted in lower fuel burn and lower structural weight, and that the carbon nanotube composite did not yield the increase in performance one would expect from a material with such outstanding properties. This indicates that there might be diminishing returns when it comes to the application of advanced materials to wing design, requiring further investigation.
NASA Technical Reports Server (NTRS)
Chappell, Steven P.; Norcross, Jason R.; Gernhardt, Michael L.
2009-01-01
NASA's Constellation Program has plans to return to the Moon within the next 10 years. Although reaching the Moon during the Apollo Program was a remarkable human engineering achievement, fewer than 20 extravehicular activities (EVAs) were performed. Current projections indicate that the next lunar exploration program will require thousands of EVAs, which will require spacesuits that are better optimized for human performance. Limited mobility and dexterity, and the position of the center of gravity (CG) are a few of many features of the Apollo suit that required significant crew compensation to accomplish the objectives. Development of a new EVA suit system will ideally result in performance close to or better than that in shirtsleeves at 1 G, i.e., in "a suit that is a pleasure to work in, one that you would want to go out and explore in on your day off." Unlike the Shuttle program, in which only a fraction of the crew perform EVA, the Constellation program will require that all crewmembers be able to perform EVA. As a result, suits must be built to accommodate and optimize performance for a larger range of crew anthropometry, strength, and endurance. To address these concerns, NASA has begun a series of tests to better understand the factors affecting human performance and how to utilize various lunar gravity simulation environments available for testing.
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument, and there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and of MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms, and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts, the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
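The simulation behind a bias detection curve is straightforward to reproduce in outline: stream results through a truncated moving average with control limits, inject a bias, and record how many results are needed before an alarm. In the sketch below the distribution parameters, truncation limits, and control limits are hypothetical, truncation is implemented by winsorizing, and the bias is present from the first result.

```python
import numpy as np

rng = np.random.default_rng(4)

def results_to_detection(bias, mean=140.0, sd=3.0, trunc=(130, 150),
                         window=50, limits=(138.5, 141.5), n=100_000):
    """Number of post-bias results until the moving average alarms."""
    buf = []
    for i in range(n):
        x = rng.normal(mean, sd) + bias
        x = min(max(x, trunc[0]), trunc[1])    # truncation (winsorized)
        buf.append(x)
        if len(buf) > window:
            buf.pop(0)
        ma = sum(buf) / len(buf)
        if len(buf) == window and not (limits[0] <= ma <= limits[1]):
            return i + 1
    return None

# Median detection delay over repeated simulations, for one bias level;
# sweeping the bias level traces out a bias detection curve.
delays = [results_to_detection(bias=2.0) for _ in range(25)]
print(np.median([d for d in delays if d is not None]))
```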
Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...
2016-05-20
We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
NASA Technical Reports Server (NTRS)
1972-01-01
The Performance Analysis and Design Synthesis (PADS) computer program has a two-fold purpose. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module. For Volume 1 see N73-13199.
Physical insight into the simultaneous optimization of structure and control
NASA Technical Reports Server (NTRS)
Jacques, Robert N.; Miller, David W.
1993-01-01
Recent trends in spacecraft design, which yield larger structures with more stringent performance requirements, place many flexible modes of the structure within the bandwidth of active controllers. The resulting complications to the spacecraft design make it highly desirable to understand the impact of structural changes on an optimally controlled structure. This work uses low-order structural models with optimal H₂ and H∞ controllers to develop some basic insight into this problem. This insight concentrates on several basic approaches to improving controlled performance and how these approaches interact in determining the optimal designs. A numerical example is presented to demonstrate how this insight can be generalized to more complex problems.
Kharmanda, G
2016-11-01
A new strategy of multi-objective structural optimization is integrated into the Austin-Moore prosthesis in order to improve its performance; the resulting model is the so-called Improved Austin-Moore. The topology optimization is considered as a conceptual design stage to sketch several kinds of hollow stems according to the daily loading cases. The shape optimization presents the detailed design stage considering several objectives. Here, a new multiplicative formulation is proposed as a performance scale in order to define the best compromise between several requirements. Numerical applications on 2D and 3D problems are carried out to show the advantages of the proposed model.
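The multiplicative formulation can be pictured as a weighted product of normalized performance metrics, so a design scoring zero on any requirement scores zero overall. A toy version with invented metrics for candidate stems:

```python
def multiplicative_score(metrics, weights):
    """Weighted-product performance scale: larger is better for every metric."""
    score = 1.0
    for m, w in zip(metrics, weights):
        score *= m ** w
    return score

# Hypothetical normalized requirements for candidate stem designs:
# (stiffness ratio, weight saving, stress margin), equal weights.
designs = {"solid stem":    (1.00, 0.10, 0.90),
           "hollow stem A": (0.85, 0.45, 0.80),
           "hollow stem B": (0.70, 0.60, 0.60)}
weights = (1.0, 1.0, 1.0)
best = max(designs, key=lambda k: multiplicative_score(designs[k], weights))
print(best)  # best compromise under the multiplicative scale
```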
Optimizing Aesthetic Outcomes in Delayed Breast Reconstruction
2017-01-01
Background: The need to restore both the missing breast volume and breast surface area makes achieving excellent aesthetic outcomes in delayed breast reconstruction especially challenging. Autologous breast reconstruction can be used to achieve both goals. The aim of this study was to identify surgical maneuvers that can optimize aesthetic outcomes in delayed breast reconstruction. Methods: This is a retrospective review of operative and clinical records of all patients who underwent unilateral or bilateral delayed breast reconstruction with autologous tissue between April 2014 and January 2017. Three groups of delayed breast reconstruction patients were identified based on patient characteristics. Results: A total of 26 flaps were successfully performed in 17 patients. Key surgical maneuvers for achieving aesthetically optimal results were identified. A statistically significant difference in volume requirements was identified in cases where a delayed breast reconstruction and a contralateral immediate breast reconstruction were performed simultaneously. Conclusions: Optimal aesthetic results can be achieved with: (1) restoration of the breast skin envelope with tissue expansion when possible; (2) optimal positioning of a small skin paddle to be later incorporated entirely into a nipple-areola reconstruction when adequate breast skin surface area is present; (3) limiting the reconstructed breast mound to 2 skin tones when large-area skin resurfacing is required; (4) increasing breast volume by deepithelializing, not discarding, the inferior mastectomy flap skin; (5) eccentric division of abdominal flaps when immediate and delayed bilateral breast reconstructions are performed simultaneously; and (6) performing second-stage breast reconstruction revisions and fat grafting. PMID:28894666
Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.
Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O
2016-03-01
An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator in performing a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by the human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information about the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model, adjusting the robot's dynamics to the operator's skills and minimizing the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental implementation results on a PR2 robot, confirm the suitability of the proposed method.
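The outer-loop target, an LQR solution for the prescribed impedance parameters, can be written down directly when the task dynamics are known; the integral reinforcement learning in the paper exists precisely to avoid needing that model. The system matrices and weights below are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator task dynamics: x = [error, error_rate].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])     # penalize tracking error (human-effort proxy)
R = np.array([[0.1]])        # penalize commanded assistance

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # optimal gain, u = -K x
print(K)   # these gains map onto stiffness/damping of the impedance model
```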
NASA Astrophysics Data System (ADS)
Aittokoski, Timo; Miettinen, Kaisa
2008-07-01
Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.
A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application Specific Integrated Circuit (ASIC) design for space applications involves the multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system, because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way which facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation. Templates provide a starting point for both, while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the new process-specific device models. The system has been used in the design of time-to-digital converters for laser ranging and time-of-flight mass spectrometry to optimize analog, mixed-signal, and digital circuits such as charge sensitive amplifiers, comparators, delay elements, radiation-tolerant dual interlocked (DICE) flip-flops, and two-of-three voter gates.
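The heart of such a framework is a loop that sweeps process and environment corners, extracts metrics from each simulation, and reduces them through a cost function. The stripped-down sketch below replaces the CAD-tool run with a placeholder; the corner names, metric model, and weighting are invented.

```python
import itertools

corners = {"process": ["ss", "tt", "ff"],
           "vdd":     [1.62, 1.80, 1.98],
           "temp_C":  [-55, 25, 125]}

def simulate(corner):
    # Placeholder for a CAD-tool run returning performance metrics.
    p, v, t = corner
    gain = 40 - 3 * (v < 1.7) - 2 * (t > 100) - (p == "ss")
    power = 1.0 + 0.5 * (v > 1.9) + 0.002 * max(t, 0)
    return {"gain_db": gain, "power_mw": power}

def cost(metrics, gain_min=36.0):
    # Penalize requirement violations heavily; otherwise minimize power.
    penalty = 100.0 * max(0.0, gain_min - metrics["gain_db"])
    return metrics["power_mw"] + penalty

worst = max(cost(simulate(c)) for c in itertools.product(*corners.values()))
print(f"worst-case cost across corners: {worst:.2f}")
# An optimizer varies design parameters to minimize this worst case.
```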
NASA Astrophysics Data System (ADS)
Welch, Kevin; Leonard, Jerry; Jones, Richard D.
2010-08-01
Increasingly stringent requirements on the performance of diffractive optical elements (DOEs) used in wafer scanner illumination systems are driving continuous improvements in their associated manufacturing processes. Specifically, these processes are designed to improve the output pattern uniformity of off-axis illumination systems to minimize degradation in the ultimate imaging performance of a lithographic tool. In this paper, we discuss performance improvements in both photolithographic patterning and RIE etching of fused silica diffractive optical structures. In summary, optimized photolithographic processes were developed to increase critical dimension uniformity and feature-size linearity across the substrate. The photoresist film thickness was also optimized for integration with an improved etch process. This etch process was itself optimized for pattern transfer fidelity, sidewall profile (wall angle, trench bottom flatness), and across-wafer etch depth uniformity. Improvements observed with these processes on idealized test structures (for ease of analysis) led to their implementation in product flows, with comparable increases in performance and yield on customer designs.
Novel operation and control of an electric vehicle aluminum/air battery system
NASA Astrophysics Data System (ADS)
Zhang, Xin; Yang, Shao Hua; Knickle, Harold
The objective of this paper is to create a method of sizing battery subsystems for an electric vehicle to optimize battery performance. Optimization of performance includes minimizing corrosion by operating at a constant current density. These subsystems will allow for easy mechanical recharging. A proper choice of battery subsystem will allow for longer battery life, greater range, and better performance. For longer life, the current density and reaction rate should be nearly constant. The control method requires control of power by controlling electrolyte flow in battery sub-modules. As power is increased, more sub-modules come on line and more electrolyte is needed. Solenoid valves open in a sequence to provide the required power. Corrosion is limited because there is no electrolyte in the modules not being used.
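As a rough illustration of the sequencing logic described above, the sketch below maps a power demand to the number of energized sub-modules; the module rating, module count, and function names are assumptions made up for the example, not values from the paper.

```python
import math

# Toy sequencing sketch: bring battery sub-modules on line only as demanded
# power rises, so each active module runs near its design current density.
MODULE_POWER_W = 2000.0   # assumed rating of one sub-module at design current density
NUM_MODULES = 12          # assumed pack size

def modules_required(demand_w):
    """Number of sub-modules whose solenoid valves should be open."""
    n = math.ceil(demand_w / MODULE_POWER_W)
    return min(max(n, 0), NUM_MODULES)

for demand in (1500, 7300, 21000):
    print(demand, "W ->", modules_required(demand), "modules energized")
```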
DOT National Transportation Integrated Search
2012-11-01
The purpose of this project is to develop the Intelligent Network Flow Optimization (INFLO), which is one collection (or bundle) of high-priority transformative applications identified by the United States Department of Transportation (USDOT) Mob...
Optimal design of a main driving mechanism for servo punch press based on performance atlases
NASA Astrophysics Data System (ADS)
Zhou, Yanhua; Xie, Fugui; Liu, Xinjun
2013-09-01
The servomotor-driven turret punch press is attracting more attention and being developed intensively due to its advantages of high speed, high accuracy, high flexibility, high productivity, low noise, cleanliness, and energy savings. To effectively improve performance and lower cost, it is necessary to develop new mechanisms and establish a corresponding optimal design method with uniform performance indices. A new patented main driving mechanism and a new optimal design method are proposed. In the optimal design, the performance indices, i.e., the local motion/force transmission indices (ITI, OTI), the good transmission workspace (GTW), and the global transmission indices (GTIs), are defined. The non-dimensional normalization method is used to obtain all feasible solutions in dimensional synthesis. Thereafter, the performance atlases, which can present all possible design solutions, are depicted. As a result, a feasible solution of the mechanism with good motion/force transmission performance is obtained. The solution can be flexibly adjusted by the designer according to practical design requirements. The proposed mechanism is original, and the presented design method provides a feasible approach to the optimal design of the main driving mechanism for a servo punch press.
Optimizing cementitious content in concrete mixtures for required performance.
DOT National Transportation Integrated Search
2012-01-01
"This research investigated the effects of changing the cementitious content required at a given water-to-cement ratio (w/c) on workability, strength, and durability of a concrete mixture. : An experimental program was conducted in which 64 concrete ...
Luo, Biao; Liu, Derong; Wu, Huai-Ning
2018-06-01
Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem for nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a certain performance index and suit only linear or affine nonlinear systems, which is restrictive in practice. To overcome this limitation, the system transformation is first introduced with a general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
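A toy, tabular rendition of the value-iteration backup behind VIQL is sketched below. The paper's method uses a neural-network critic over data from a nonaffine nonlinear system; this discrete example, with a fabricated transition batch, only illustrates the iteration Q(s,a) <- r + gamma * max_a' Q(s',a') from a simple initial condition.

```python
import numpy as np

# Tabular value-iteration Q-learning sketch over a batch of (s, a, r, s') data.
n_states, n_actions, gamma = 5, 2, 0.9

# Fabricated batch of transitions standing in for measured system data.
data = [(s, a, -abs(s - 2) + a, min(s + a, n_states - 1))
        for s in range(n_states) for a in range(n_actions)]

Q = np.zeros((n_states, n_actions))   # easy-to-realize initial condition Q0 = 0
for _ in range(200):
    Q_new = Q.copy()
    for s, a, r, s_next in data:
        Q_new[s, a] = r + gamma * Q[s_next].max()   # value-iteration backup
    if np.max(np.abs(Q_new - Q)) < 1e-8:
        break
    Q = Q_new

policy = Q.argmax(axis=1)   # greedy controller from the converged Q-function
print(policy)
```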
NASA Astrophysics Data System (ADS)
Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli
2017-03-01
Electrostatic or capacitive accelerometers are among the highest-volume microelectromechanical systems (MEMS) products nowadays. The design of such devices is a complex task, since they depend on many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of reliability concerns, MEMS technology is not yet fully established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first-order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.
Efficiency Management in Spaceflight Systems
NASA Technical Reports Server (NTRS)
Murphy, Karen
2016-01-01
Efficiency in spaceflight is often approached as “faster, better, cheaper – pick two”. The high levels of performance and reliability required for each mission suggest that planners can only control two of the three. True efficiency comes from optimizing a system across all three parameters. The functional processes of spaceflight become technical requirements on three operational groups during mission planning: payload, vehicle, and launch operations. Given the interrelationships among the functions performed by the operational groups, optimizing function resources from one operational group to the others affects the efficiency of those groups and therefore the mission overall. This paper outlines this framework and creates a context in which to understand the effects of resource trades on the overall system, improving the efficiency of the operational groups and the mission as a whole. This allows insight into and optimization of the controlling factors earlier in the mission planning stage.
Modeling error analysis of stationary linear discrete-time filters
NASA Technical Reports Server (NTRS)
Patel, R.; Toda, M.
1977-01-01
The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter when only the range of errors in the elements of the model matrices is available.
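One ingredient of such a stationary analysis is the steady-state error covariance of the optimal filter, obtainable by iterating the discrete Riccati recursion using only the model matrices. The sketch below assumes an illustrative two-state, scalar-measurement model; bounding a suboptimal filter against a range of matrix errors would build on the same recursion.

```python
import numpy as np

# Iterate the discrete Riccati recursion to the steady-state Kalman error
# covariance. All model matrices below are illustrative assumptions.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # measurement matrix
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

P = np.eye(2)
for _ in range(1000):
    P_pred = A @ P @ A.T + Q                                   # time update
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)     # Kalman gain
    P_next = (np.eye(2) - K @ H) @ P_pred                      # measurement update
    if np.max(np.abs(P_next - P)) < 1e-12:
        break
    P = P_next

print("steady-state error covariance:\n", P)
```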
Static Memory Deduplication for Performance Optimization in Cloud Computing.
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-04-27
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. A lack of sufficient memory has become a major bottleneck for the scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce the memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces the memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.
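The page-sharing idea can be made concrete with a short sketch: hash fixed-size pages of read-only code segments offline and store one copy per distinct hash. This is a hypothetical illustration, not the SMD implementation; the page size and byte-string "segments" simply stand in for VM memory images.

```python
import hashlib

# Toy page-level deduplication: identical pages are stored once, and each
# segment keeps a list of content hashes (a stand-in for its page table).
PAGE = 4096

def dedup(code_segments):
    store = {}          # content hash -> single shared page
    mappings = []       # per-segment list of page hashes
    for seg in code_segments:
        pages = [seg[i:i + PAGE] for i in range(0, len(seg), PAGE)]
        table = []
        for p in pages:
            h = hashlib.sha256(p).hexdigest()
            store.setdefault(h, p)          # identical pages stored once
            table.append(h)
        mappings.append(table)
    return store, mappings

vm1 = bytes(PAGE) * 3                       # three identical zero pages
vm2 = bytes(PAGE) * 2 + b"\x01" * PAGE      # two zero pages plus one distinct page
store, maps = dedup([vm1, vm2])
print("pages before:", 6, "after dedup:", len(store))
```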
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
Thin client performance for remote 3-D image display.
Lai, Albert; Nieh, Jason; Laine, Andrew; Starren, Justin
2003-01-01
Several trends in biomedical computing are converging in a way that will require new approaches to telehealth image display. Image viewing is becoming an "anytime, anywhere" activity. In addition, organizations are beginning to recognize that healthcare providers are highly mobile and that optimal care requires providing information wherever the provider and patient are. Thin-client computing is one way to support image viewing in this complex environment. However, little is known about the behavior of thin-client systems in supporting image transfer in modern heterogeneous networks. Our results show that thin clients can deliver acceptable performance under conditions commonly seen in wireless networks if newer protocols optimized for these conditions are used.
Optimal design of gas adsorption refrigerators for cryogenic cooling
NASA Technical Reports Server (NTRS)
Chan, C. K.
1983-01-01
The design of gas adsorption refrigerators used for cryogenic cooling in the temperature range of 4K to 120K was examined. The functional relationships among the power requirement for the refrigerator, the system mass, the cycle time and the operating conditions were derived. It was found that the precool temperature, the temperature dependent heat capacities and thermal conductivities, and pressure and temperature variations in the compressors have important impacts on the cooling performance. Optimal designs based on a minimum power criterion were performed for four different gas adsorption refrigerators and a multistage system. It is concluded that the estimates of the power required and the system mass are within manageable limits in various spacecraft environments.
Process control systems: integrated for future process technologies
NASA Astrophysics Data System (ADS)
Botros, Youssry; Hajj, Hazem M.
2003-06-01
Process Control Systems (PCS) are becoming more crucial to the success of Integrated Circuit makers due to their direct impact on product quality, cost, and Fab output. The primary objective of PCS is to minimize variability by detecting and correcting non-optimal performance. Current PCS implementations are considered disparate, where each PCS application is designed, deployed, and supported separately. Each implementation targets a specific area of control such as equipment performance, wafer manufacturing, and process health monitoring. With Intel entering the nanometer technology era, tighter process specifications are required for higher yields and lower cost. This requires areas of control to be tightly coupled and integrated to achieve optimal performance, which can be accomplished via consistent design and deployment of an integrated PCS. PCS integration will result in several benefits such as leveraging commonalities, avoiding redundancy, and facilitating sharing between implementations. This paper will address PCS implementations and focus on the benefits and requirements of the integrated PCS. Intel's integrated PCS architecture will then be presented and its components briefly discussed. Finally, industry direction and efforts to standardize PCS interfaces that enable PCS integration will be presented.
Kafetzoglou, Stella; Aristomenopoulos, Giorgos; Papavassiliou, Symeon
2015-08-11
Among the key aspects of the Internet of Things (IoT) is the integration of heterogeneous sensors in a distributed system that performs actions on the physical world based on environmental information gathered by sensors and on application-related constraints and requirements. Numerous applications of Wireless Sensor Networks (WSNs) have appeared in various fields, from environmental monitoring, to tactical fields, and healthcare at home, promising to change our quality of life and facilitating the vision of sensor-network-enabled smart cities. Given the enormous requirements that emerge in such a setting, both in terms of data and energy, data aggregation appears as a key element in reducing the amount of traffic in wireless sensor networks and achieving energy conservation. Probabilistic frameworks have been introduced as operationally efficient and performance-effective solutions for data aggregation in distributed sensor networks. In this work, we introduce an overall optimization approach that improves and complements such frameworks by identifying the optimal probability for a node to aggregate packets as well as the optimal aggregation period that a node should wait before performing aggregation, so as to minimize overall energy consumption while satisfying certain imposed delay constraints. Primal-dual decomposition is employed to solve the corresponding optimization problem, while simulation results demonstrate the operational efficiency of the proposed approach under different traffic and topology scenarios.
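A crude way to see the optimization being solved is a grid search over the aggregation probability p and aggregation period T, minimizing an energy model subject to a delay bound. The sketch below uses invented energy and delay expressions as stand-ins for the paper's model, which is solved more efficiently by primal-dual decomposition.

```python
import numpy as np

# Grid-search sketch of the (p, T) trade-off under a delay constraint.
# All rates, energies, and the delay model are illustrative assumptions.
RATE = 50.0          # packets/s arriving at a node
E_TX = 1.0           # energy per transmitted packet
E_AGG = 0.05         # energy per aggregation operation
DELAY_MAX = 0.2      # seconds, imposed delay constraint

best = None
for p in np.linspace(0.0, 1.0, 101):        # aggregation probability
    for T in np.linspace(0.01, 0.5, 50):    # aggregation period (s)
        delay = p * T / 2                   # mean extra delay for held packets
        if delay > DELAY_MAX:
            continue                        # infeasible: violates delay bound
        tx = RATE * (1 - p) + p / T         # packets actually sent per second
        energy = E_TX * tx + E_AGG * RATE * p
        if best is None or energy < best[0]:
            best = (energy, p, T)

print("energy=%.2f  p=%.2f  T=%.2fs" % best)
```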
Jacobsen, Sonja; Patel, Pranav; Schmidt-Chanasit, Jonas; Leparc-Goffart, Isabelle; Teichmann, Anette; Zeller, Herve; Niedrig, Matthias
2016-03-01
Since the re-emergence of Chikungunya virus (CHIKV) in Reunion in 2005 and the recent outbreak in the Caribbean islands with expansion to the Americas, CHIK diagnostics have become very important. We evaluated the performance of laboratories worldwide regarding molecular and serological diagnosis of CHIK. A panel of 12 samples for molecular testing and 13 samples for serology was provided to 60 laboratories in 40 countries to evaluate the sensitivity and specificity of molecular and serological testing. The panel for molecular diagnostic testing was analysed by 56 laboratories returning 60 data sets of results, whereas 56 and 60 data sets were returned by the participating laboratories for IgG and IgM diagnostics, respectively. Twenty-three of the 60 molecular data sets performed optimally, 7 acceptably, and 30 required improvement. Of 50 IgM data sets, only one laboratory showed optimal performance, 9 data sets were acceptable, and the rest required improvement. Of 46 IgG serology data sets, 20 performed optimally, 2 acceptably, and 24 required improvement. The evaluation of some of the diagnostic performances allows linking the quality of results to the in-house methods or commercial assays used. The external quality assurance for CHIK diagnostics provides a good overview of laboratory performance regarding the sensitivity and specificity of the molecular and serological diagnostics required for quick and reliable analysis of suspected CHIK patients. Nearly half of the laboratories have to improve their diagnostic profile to achieve better performance.
Design and analysis of the Collider SPXA/SPRA spool piece vacuum barrier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cruse, G.; Aksel, G.
1993-04-01
A design for the Collider SPXA/SPRA spool piece vacuum barrier was developed to meet a variety of thermal and structural performance requirements. Both composite and stainless steel alternatives were investigated using detailed finite-element analysis before selecting an optimized version of the ASST SPR spool vacuum barrier design. This design meets the structural requirements and will be able to meet the thermal performance requirements by using some newer thermal strapping configurations.
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may differ from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design are included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably, and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design are included here. The evolutionary method (DE) is first used to solve a relatively difficult problem in extended-surface heat transfer, wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil, the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and maximization of the trailing edge wedge angle with a consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
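For readers unfamiliar with the evolutionary method referenced above, a bare-bones differential evolution (DE/rand/1/bin) loop is sketched below on a toy scalarized two-objective function. The bounds, weights, and objective are illustrative assumptions; the lecture's multiple-objective treatment ranks candidates instead of scalarizing.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Toy two-objective problem scalarized with fixed, assumed weights.
    f1 = np.sum((x - 1.0) ** 2)      # e.g. performance at the design point
    f2 = np.sum((x + 1.0) ** 2)      # e.g. a sensitivity/robustness proxy
    return 0.5 * f1 + 0.5 * f2

dim, npop, F, CR = 4, 20, 0.8, 0.9
pop = rng.uniform(-5, 5, (npop, dim))
fit = np.array([objective(x) for x in pop])

for _ in range(200):
    for i in range(npop):
        a, b, c = pop[rng.choice([j for j in range(npop) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                 # differential mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True          # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial <= fit[i]:                    # greedy one-to-one selection
            pop[i], fit[i] = trial, f_trial

print("best:", pop[fit.argmin()], "f =", fit.min())
```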
Remmelink, M; Sokolow, Y; Leduc, D
2015-04-01
Histopathology is key to the diagnosis and staging of lung cancer. This analysis requires tissue sampling from primary and/or metastatic lesions. The choice of sampling technique is intended to optimize diagnostic yield while avoiding unnecessarily invasive procedures. Recent developments in targeted therapy require increasingly precise histological and molecular characterization of the tumor. Therefore, pathologists must be economical with tissue samples to ensure that they have the opportunity to perform all the analyses required. More than ever, good communication between clinician, endoscopist or surgeon, and pathologist is essential. This is necessary so that all participants in the lung cancer diagnostic process collaborate to ensure that the appropriate number and type of biopsies are performed, with appropriate handling of the tissue samples. This allows all the necessary analyses to be performed, leading to a more precise characterization of the tumor and thus optimal treatment for patients with lung cancer.
Optimal design of a piezoelectric transducer for exciting guided wave ultrasound in rails
NASA Astrophysics Data System (ADS)
Ramatlo, Dineo A.; Wilke, Daniel N.; Loveday, Philip W.
2017-02-01
An existing Ultrasonic Broken Rail Detection System installed in South Africa on a heavy-duty railway line is currently being upgraded to include defect detection and location. To accomplish this, an ultrasonic piezoelectric transducer that strongly excites a guided wave mode with energy concentrated in the web of the rail (web mode) is required. A previous study demonstrated that the recently developed SAFE-3D (Semi-Analytical Finite Element - 3 Dimensional) method can effectively predict the guided waves excited by a resonant piezoelectric transducer. In this study, the SAFE-3D model is used in the design optimization of a rail web transducer. A bound-constrained optimization problem was formulated to maximize the energy transmitted by the transducer in the web mode when driven by a pre-defined excitation signal. Dimensions of the transducer components were selected as the three design variables. A Latin hypercube sampled design of experiments, requiring a total of 500 SAFE-3D analyses in the design space, was employed in a response surface-based optimization approach. The Nelder-Mead optimization algorithm was then used to find an optimal transducer design on the constructed response surface. The radial basis function response surface was first verified by comparing a number of predicted responses against the computed SAFE-3D responses. The performance of the optimal transducer predicted by the optimization algorithm on the response surface was also verified to be sufficiently accurate using SAFE-3D. The computational advantages of SAFE-3D in optimal transducer design are noteworthy, as more than 500 analyses were performed. The optimal design was then manufactured, and experimental measurements were used to validate the predicted performance. The adopted design method has demonstrated the capability to automate the design of transducers for a particular rail cross-section and frequency range.
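The two-stage strategy (fit a radial basis function surface to sampled analyses, then run Nelder-Mead on the cheap surface) can be sketched as follows. The expensive_analysis placeholder, sample count, and kernel width are assumptions standing in for the SAFE-3D runs and the 500-point design of experiments.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def expensive_analysis(x):
    # Placeholder for a SAFE-3D run returning (negated) web-mode energy,
    # so that minimizing the surrogate maximizes transmitted energy.
    return -np.exp(-np.sum((x - 0.6) ** 2))

# 1) Sample the 3-variable design space (a real design used 500 points).
X = rng.uniform(0, 1, (60, 3))
y = np.array([expensive_analysis(x) for x in X])

# 2) Gaussian RBF interpolant fitted by solving a linear system.
eps = 2.0
def phi(r): return np.exp(-(eps * r) ** 2)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
w = np.linalg.solve(phi(D), y)

def surrogate(x):
    r = np.linalg.norm(X - x, axis=1)
    return phi(r) @ w

# 3) Optimize the cheap surrogate with Nelder-Mead, then verify the optimum
#    against the expensive analysis.
res = minimize(surrogate, x0=np.full(3, 0.5), method="Nelder-Mead")
print("surrogate optimum:", res.x, "verified:", expensive_analysis(res.x))
```

The verification step at the end mirrors the paper's practice of checking surrogate predictions against fresh SAFE-3D computations before committing to manufacture.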
Optical performance of random anti-reflection structured surfaces (rARSS) on spherical lenses
NASA Astrophysics Data System (ADS)
Taylor, Courtney D.
Random anti-reflection structured surfaces (rARSS) have been reported to improve the transmittance of optical-grade fused silica planar substrates to values greater than 99%. These textures are fabricated directly on the substrates using reactive-ion/inductively-coupled plasma etching (RIE/ICP) techniques, and often result in transmitted spectra with no measurable interference effects (fringes) over a wide range of wavelengths. The RIE/ICP processes used to etch the rARSS are anisotropic and thus well suited for planar components. The improvement in spectral transmission has been found to be independent of optical incidence angle for values from 0° to +/-30°. Qualifying and quantifying rARSS performance on curved substrates, such as convex lenses, is required to optimize the fabrication of the desired AR effect on optical-power elements. In this work, rARSS was fabricated on fused silica plano-convex (PCX) and plano-concave (PCV) lenses using a planar-substrate-optimized RIE process to maximize optical transmission in the range from 500 to 1100 nm. An additional set of lenses was etched in a non-optimized ICP process to provide additional comparisons. Results are presented from optical transmission and beam propagation tests (optimized lenses only) of rARSS lenses for both TE and TM incident polarizations at a wavelength of 633 nm and over a 70° full field of view, in both singlet and doublet configurations. These results suggest that optimization of the fabrication process is not required, mainly due to the wide angle-of-incidence AR tolerance of the rARSS lenses. Non-optimized-recipe lenses showed low transmission enhancement, confirming the need to optimize etch recipes prior to process transfer to PCX/PCV lenses. Beam propagation tests indicated no major beam degradation through the optimized lens elements. Scanning electron microscopy (SEM) images confirmed structural differences between optimized and non-optimized samples. SEM images also indicated isotropically-oriented surface structures on both types of lenses.
In Silico Evaluation of Pharmacokinetic Optimization for Antimitogram-Based Clinical Trials.
Haviari, Skerdi; You, Benoît; Tod, Michel
2018-04-01
Antimitograms are prototype in vitro tests for evaluating chemotherapeutic efficacy using patient-derived primary cancer cells. These tests might help optimize treatment from a pharmacodynamic standpoint by guiding treatment selection. However, they are technically challenging and require refinement, and trials demonstrating benefit, before they can be widely used. In this study, we performed simulations aimed at exploring how to validate antimitograms and how to complement them with pharmacokinetic optimization. A generic model of advanced cancer, including pharmacokinetic-pharmacodynamic monitoring, was used to link dosing schedules with progression-free survival (PFS), built from previously validated modules. This model was used to explore different possible situations in terms of pharmacokinetic variability, pharmacodynamic variability, and antimitogram performance. The model recapitulated tumor dynamics and standalone therapeutic drug monitoring efficacy consistent with published clinical results. Simulations showed that combining pharmacokinetic and pharmacodynamic optimization should increase PFS in a synergistic fashion. Simulated data were then used to compute required clinical trial sizes, which were 30% to 90% smaller when pharmacokinetic optimization was added to pharmacodynamic optimization. This improvement was observed even when pharmacokinetic optimization alone exhibited only modest benefit. Overall, our work illustrates the synergy derived from combining antimitograms with therapeutic drug monitoring, permitting a disproportionate reduction in the trial size required to prove a benefit on PFS. Accordingly, we suggest that strategies with benefits too small for standalone clinical trials could be validated in combination in a similar manner. Significance: This work offers a method to reduce the number of patients needed for a clinical trial to prove the hypothesized benefit of a drug to progression-free survival, possibly easing opportunities to evaluate combinations.
Mo, Fuhao; Zhao, Siqi; Yu, Chuanhui; Xiao, Zhi; Duan, Shuyong
2018-01-01
The car front bumper system needs to meet the requirements of both pedestrian safety and low-speed impact, which are somewhat contradictory. This study aims to design a new kind of modular self-adaptive energy absorber for the front bumper system which can balance the two performance requirements. An X-shaped energy-absorbing structure is proposed which can enhance energy absorption capacity during impact by changing its deformation mode based on the amount of external collision energy. Finite element simulations with a realistic vehicle bumper system are performed to demonstrate its crashworthiness in comparison with a traditional foam energy absorber, showing a significant improvement in both performance measures. Furthermore, the structural parameters of the X-shaped energy-absorbing structure, including thickness (t_u), side arc radius (R), and clamping boost beam thickness (t_b), are analyzed using a full factorial method, and a multiobjective optimization is implemented with respect to evaluation indexes for both pedestrian safety and low-speed impact. The optimal parameters are then verified, and the feasibility of the optimal results is confirmed. In conclusion, the new X-shaped energy absorber can meet both pedestrian safety and low-speed impact requirements well by altering its main deformation modes according to different impact energy levels.
Efficient QoS-aware Service Composition
NASA Astrophysics Data System (ADS)
Alrifai, Mohammad; Risse, Thomas
Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g., response time, throughput, and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first, we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints. Second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, yielding faster computation and more scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.
Optimizing Multiple QoS for Workflow Applications using PSO and Min-Max Strategy
NASA Astrophysics Data System (ADS)
Umar Ambursa, Faruku; Latip, Rohaya; Abdullah, Azizol; Subramaniam, Shamala
2017-08-01
Workflow scheduling under multiple QoS constraints is a complicated optimization problem. Metaheuristic techniques are excellent approaches for dealing with such problems, and many metaheuristic-based algorithms that consider various economic and trust-related QoS dimensions have been proposed. However, most of these approaches lead to high violation of user-defined QoS requirements in tight situations. Recently, a new Particle Swarm Optimization (PSO)-based QoS-aware workflow scheduling strategy (LAPSO) was proposed to improve performance in such situations. The LAPSO algorithm is designed around a synergy between a violation-handling method and a hybrid of PSO and the min-max heuristic. Simulation results showed the great potential of the LAPSO algorithm for handling user requirements even in tight situations. In this paper, the performance of the algorithm is analysed further. Specifically, the impact of the min-max strategy on the performance of the algorithm is revealed, by removing the violation handling from the operation of the algorithm. The results show that LAPSO based only on the min-max method still outperforms the benchmark, although LAPSO with violation handling performs significantly better.
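As background for the PSO machinery discussed above, a bare-bones particle swarm loop is sketched below on a continuous test function. In the scheduling setting each particle would instead encode a task-to-service mapping and the fitness would mix cost, makespan, and violation terms; all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    return np.sum(x ** 2)          # stand-in for a QoS cost function

n, dim, w, c1, c2 = 30, 5, 0.7, 1.5, 1.5
pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()                              # personal bests
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()          # global best

for _ in range(300):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best fitness:", pbest_f.min())
```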
Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.
1999-01-01
A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yielded a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
NASA Astrophysics Data System (ADS)
Heitzman, Nicholas
There are significant fuel consumption consequences of non-optimal flight operations. This study is intended to analyze and highlight areas of interest that affect fuel consumption in typical flight operations. By gathering information from actual flight operators (pilots, dispatch, performance engineers, and air traffic controllers), real performance issues can be addressed and analyzed. A series of interviews was performed with various individuals across the industry and its organizations. The wide range of insight directed this study to focus on FAA regulations, airline policy, the ATC system, weather, and flight planning. The goal is to highlight where operational performance differs from design intent in order to better connect optimization with actual flight operations. After further investigation and consensus from the experienced participants, the FAA regulations do not need any serious attention until newer technologies and capabilities are implemented. The ATC system is severely out of date and is one of the largest limiting factors in current flight operations. Although participants are pessimistic about its timely implementation, the FAA's NextGen program for a future National Airspace System should help improve the efficiency of flight operations. This includes situational awareness, weather monitoring, communication, information management, optimized routing, and cleaner flight profiles like Required Navigation Performance (RNP) and Continuous Descent Approach (CDA). Working off the interview results, trade studies were performed using an in-house flight profile simulation of a Boeing 737-300, integrating NASA legacy codes EDET and NPSS with a custom-written mission performance and point-performance "Skymap" calculator. From these trade studies, it was found that certain flight conditions affect flight operations more than others. With weather, traffic, and unforeseeable risks, flight planning is still limited by its high level of precaution. From this study, it is recommended that air carriers increase focus on defining policies for load scheduling, CG management, reduction in zero fuel weight, inclusion of performance measurement systems, and adapting to the regulations to best optimize the spirit of the requirement. Air carriers should also push harder to implement the FAA's NextGen system and move the industry into the future.
Design and multiphysics analysis of a 176 MHz continuous-wave radio-frequency quadrupole
NASA Astrophysics Data System (ADS)
Kutsaev, S. V.; Mustapha, B.; Ostroumov, P. N.; Barcikowski, A.; Schrage, D.; Rodnizki, J.; Berkovits, D.
2014-07-01
We have developed a new design for a 176 MHz cw radio-frequency quadrupole (RFQ) for the SARAF upgrade project. At this frequency, the proposed design is a conventional four-vane structure. The main design goals are to provide the highest possible shunt impedance while limiting the required rf power to about 120 kW for reliable cw operation, and the length to about 4 meters. If built as designed, the proposed RFQ will be the first four-vane cw RFQ built as a single cavity (no resonant coupling required) that does not require π-mode stabilizing loops or dipole rods. For this, we rely on very detailed 3D simulations of all aspects of the structure and the level of machining precision achieved on the recently developed ATLAS upgrade RFQ. A full 3D model of the structure including vane modulation was developed. The design was optimized using electromagnetic and multiphysics simulations. Following the choice of the vane type and geometry, the vane undercuts were optimized to produce a flat field along the structure. The final design has good mode separation and should not need dipole rods if built as designed, but their effect was studied in the case of manufacturing errors. The tuners were also designed and optimized to tune the main mode without affecting the field flatness. Following the electromagnetic (EM) design optimization, a multiphysics engineering analysis of the structure was performed. The multiphysics analysis is a coupled electromagnetic, thermal, and mechanical analysis. The cooling channels, including their paths and sizes, were optimized based on the limiting temperature and deformation requirements. The frequency sensitivity to the RFQ body and vane cooling water temperatures was carefully studied in order to use it for frequency fine-tuning. Finally, an inductive rf power coupler design based on the ATLAS RFQ coupler was developed and simulated. The EM design optimization was performed using CST Microwave Studio and the results were verified using both HFSS and ANSYS. The engineering analysis was performed using HFSS and ANSYS, and most of the results were verified using the newly developed CST Multiphysics package.
A globally optimal k-anonymity method for the de-identification of health data.
El Emam, Khaled; Dankar, Fida Kamal; Issa, Romeo; Jonker, Elizabeth; Amyot, Daniel; Cogo, Elise; Corriveau, Jean-Pierre; Walker, Mark; Chowdhury, Sadrul; Vaillancourt, Regis; Roffey, Tyson; Bottomley, Jim
2009-01-01
Explicit patient consent requirements in privacy laws can have a negative impact on health research, leading to selection bias and reduced recruitment. Often, legislative requirements to obtain consent are waived if the information collected or disclosed is de-identified. The authors developed and empirically evaluated a new globally optimal de-identification algorithm that satisfies the k-anonymity criterion and is suitable for health datasets. The authors compared OLA (Optimal Lattice Anonymization) empirically to three existing k-anonymity algorithms, Datafly, Samarati, and Incognito, on six public, hospital, and registry datasets for different values of k and suppression limits. Three information loss metrics were used for the comparison: precision, discernability metric, and non-uniform entropy. Each algorithm's performance speed was also evaluated. The Datafly and Samarati algorithms had higher information loss than OLA and Incognito; OLA was consistently faster than Incognito in finding the globally optimal de-identification solution. For the de-identification of health datasets, OLA is an improvement on existing k-anonymity algorithms in terms of information loss and performance.
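The k-anonymity criterion itself is easy to state in code: every combination of quasi-identifier values must occur at least k times. The sketch below checks that property for a fabricated generalized table; OLA's contribution is efficiently searching the lattice of generalizations for the least-lossy table that passes this test.

```python
from collections import Counter

# Check k-anonymity over the quasi-identifier columns of a list of records.
def is_k_anonymous(records, quasi_ids, k):
    counts = Counter(tuple(r[c] for c in quasi_ids) for r in records)
    return all(n >= k for n in counts.values())

# Fabricated, already-generalized rows (age banded, ZIP truncated).
rows = [
    {"age": "30-39", "zip": "021**", "dx": "flu"},
    {"age": "30-39", "zip": "021**", "dx": "asthma"},
    {"age": "40-49", "zip": "021**", "dx": "flu"},
]
print(is_k_anonymous(rows, ["age", "zip"], k=2))   # False: the 40-49 group has 1 row
```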
Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1997-01-01
A nonparametric-validated surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set-based geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body, eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general two-output, two-performance-metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies, and the error estimates provided by the output validation step still apply and require no additional appeals to the expensive analysis. Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.
Optimizing the fine lock performance of the Hubble Space Telescope fine guidance sensors
NASA Technical Reports Server (NTRS)
Eaton, David J.; Whittlesey, Richard; Abramowicz-Reed, Linda; Zarba, Robert
1993-01-01
This paper summarizes the on-orbit performance to date of the three Hubble Space Telescope Fine Guidance Sensors (FGS's) in Fine Lock mode, with respect to acquisition success rate, ability to maintain lock, and star brightness range. The process of optimizing Fine Lock performance, including the reasoning underlying the adjustment of uplink parameters, and the effects of optimization are described. The Fine Lock optimization process has combined theoretical and experimental approaches. Computer models of the FGS have improved understanding of the effects of uplink parameters and fine error averaging on the ability of the FGS to acquire stars and maintain lock. Empirical data have determined the variation of the interferometric error characteristics (so-called 's-curves') between FGS's and over each FGS field of view, identified binary stars, and quantified the systematic error in Coarse Track (the mode preceding Fine Lock). On the basis of these empirical data, the values of the uplink parameters can be selected more precisely. Since launch, optimization efforts have improved FGS Fine Lock performance, particularly acquisition, which now enjoys a nearly 100 percent success rate. More recent work has been directed towards improving FGS tolerance of two conditions that exceed its original design requirements. First, large amplitude spacecraft jitter is induced by solar panel vibrations following day/night transitions. This jitter is generally much greater than the FGS's were designed to track, and while the tracking ability of the FGS's has been shown to exceed design requirements, losses of Fine Lock after day/night transitions are frequent. Computer simulations have demonstrated a potential improvement in Fine Lock tracking of vehicle jitter near terminator crossings. Second, telescope spherical aberration degrades the interferometric error signal in Fine Lock, but use of the FGS two-thirds aperture stop restores the transfer function with a corresponding loss of throughput. This loss requires the minimum brightness of acquired stars to be about one magnitude brighter than originally planned.
NASA Technical Reports Server (NTRS)
Kofal, Allen E.
1987-01-01
The mission and system requirements for the concept definition and system analysis of the Orbital Transfer Vehicle (OTV) are established. The requirements set forth constitute the single authority for the selection, evaluation, and optimization of the technical performance and design of the OTV. This requirements document forms the basis for the Ground and Space Based OTV concept definition analyses and establishes the physical, functional, performance and design relationships to STS, Space Station, Orbital Maneuvering Vehicle (OMV), and payloads.
The Army Study Program Fiscal Year 1993 Report
1992-11-16
PERFORMER: CAA. Assess the results of the Ardennes campaign analysis and, if necessary, recommend modifications to CEM. PROJECT TITLE: Economic Analysis of HQDA Automation. SPONSOR: DCSOPS. PERFORMER: CAA. PROJECT TITLE: Wartime Requirements, FY 99. PUIC: CSCAMNO15. To assist HQDA in determining conventional munition requirements. This study will attempt to develop a multiple-criteria optimization model to aid in the programming of Army acquisition funds at HQDA.
An improved genetic algorithm for designing optimal temporal patterns of neural stimulation
NASA Astrophysics Data System (ADS)
Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.
2017-12-01
Objective. Electrical neuromodulation therapies typically apply constant-frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended for designing optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually, and all modifications collectively, by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
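A skeleton of a GA over binary temporal patterns (one bit per time bin) is sketched below. The scoring function is a fabricated stand-in, since the real cost function evaluates a biophysical model of stimulation; the selection, crossover, and mutation operators shown are the standard ones the paper modifies.

```python
import numpy as np

rng = np.random.default_rng(4)
BINS, POP, GENS, MUT = 200, 50, 100, 0.01   # 1 ms bins, all values illustrative

def score(pattern):
    # Fabricated fitness: reward sparse, irregular pulse trains.
    rate_penalty = abs(pattern.mean() - 0.1)            # target ~100 Hz mean rate
    irregularity = np.std(np.diff(np.flatnonzero(pattern))) if pattern.sum() > 2 else 0.0
    return irregularity - 50.0 * rate_penalty

pop = (rng.random((POP, BINS)) < 0.1).astype(int)
for _ in range(GENS):
    fit = np.array([score(p) for p in pop])
    order = np.argsort(fit)[::-1]
    parents = pop[order[:POP // 2]]                     # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, BINS)                     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flips = rng.random(BINS) < MUT                  # bit-flip mutation
        child = np.where(flips, 1 - child, child)
        children.append(child)
    pop = np.vstack([parents, children])

print("best score:", max(score(p) for p in pop))
```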
Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons.
Yaeli, Steve; Meir, Ron
2010-01-01
Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales.
Evolutionary Dynamic Multiobjective Optimization Via Kalman Filter Prediction.
Muruganantham, Arrchana; Tan, Kay Chen; Vadakkepat, Prahlad
2016-12-01
Evolutionary algorithms are effective in solving static multiobjective optimization problems resulting in the emergence of a number of state-of-the-art multiobjective evolutionary algorithms (MOEAs). Nevertheless, the interest in applying them to solve dynamic multiobjective optimization problems has only been tepid. Benchmark problems, appropriate performance metrics, as well as efficient algorithms are required to further the research in this field. One or more objectives may change with time in dynamic optimization problems. The optimization algorithm must be able to track the moving optima efficiently. A prediction model can learn the patterns from past experience and predict future changes. In this paper, a new dynamic MOEA using Kalman filter (KF) predictions in decision space is proposed to solve the aforementioned problems. The predictions help to guide the search toward the changed optima, thereby accelerating convergence. A scoring scheme is devised to hybridize the KF prediction with a random reinitialization method. Experimental results and performance comparisons with other state-of-the-art algorithms demonstrate that the proposed algorithm is capable of significantly improving the dynamic optimization performance.
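The prediction step can be illustrated with a one-dimensional sketch: a constant-velocity Kalman filter tracks a drifting optimum from noisy observations (the best solutions found after each change), and its prediction seeds the next population. All model matrices and noise levels below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])     # position/velocity transition
H = np.array([[1.0, 0.0]])                 # we observe only the position
Q, R = 0.01 * np.eye(2), np.array([[0.1]])

x, P = np.zeros(2), np.eye(2)
true_opt = 0.0
rng = np.random.default_rng(5)

for t in range(20):
    true_opt += 0.5                         # environment change: optimum drifts
    z = true_opt + rng.normal(0, 0.3)       # best solution found after the change
    # predict
    x = A @ x
    P = A @ P @ A.T + Q
    # update with the new observation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.atleast_1d(z) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("predicted next optimum:", (A @ x)[0], "true next:", true_opt + 0.5)
```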
Performance Optimization Control of ECH using Fuzzy Inference Application
NASA Astrophysics Data System (ADS)
Dubey, Abhay Kumar
Electro-chemical honing (ECH) is a hybrid electrolytic precision micro-finishing technology that, by combining the physico-chemical actions of electro-chemical machining and conventional honing, provides controlled functional surface generation and fast material removal in a single operation. Multi-performance process optimization has become vital for utilizing the full potential of manufacturing processes to meet the challenging requirements being placed on the surface quality, size, tolerances, and production rate of engineering components in this globally competitive scenario. This paper presents a strategy that integrates Taguchi matrix experimental design, analysis of variance, and a fuzzy inference system (FIS) to formulate a robust, practical multi-performance optimization methodology for complex manufacturing processes like ECH, which involve several control variables. Two methodologies, one using genetic algorithm tuning of the FIS (GA-tuned FIS) and another using an adaptive network-based fuzzy inference system (ANFIS), have been evaluated in a multi-performance optimization case study of ECH. The actual experimental results confirm their potential for a wide range of machining conditions employed in ECH.
Scout: high-performance heterogeneous computing made simple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablin, James; McCormick, Patrick; Herlihy, Maurice
2011-01-26
Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.
McCaskie, Andrew W; Kenny, Dianna T; Deshmukh, Sandeep
2011-05-02
Trainee surgeons must acquire expert status in the context of reduced hours, reduced operating room time and the need to learn complex skills involving screen-mediated techniques, computers and robotics. Ever more sophisticated surgical simulation strategies have been helpful in providing surgeons with the opportunity to practise, but not all of these strategies are widely available. Similarities in the motor skills required in skilled musical performance and surgery suggest that models of music learning, and particularly skilled motor development, may be applicable in training surgeons. More attention should be paid to factors associated with optimal arousal and optimal performance in surgical training - lessons learned from helping anxious musicians optimise performance and manage anxiety may also be transferable to trainee surgeons. The ways in which the trainee surgeon moves from novice to expert need to be better understood so that this process can be expedited using current knowledge in other disciplines requiring the performance of complex fine motor tasks with high cognitive load under pressure.
System controls challenges of hypersonic combined-cycle engine powered vehicles
NASA Technical Reports Server (NTRS)
Morrison, Russell H.; Ianculescu, George D.
1992-01-01
Hypersonic aircraft with air-breathing engines have been described as the most complex and challenging air/space vehicle designs ever attempted. This is particularly true for aircraft designed to accelerate to orbital velocities. The propulsion system for the National Aerospace Plane will be an active factor in maintaining the aircraft on course. The difficulties typically addressed are those of aerodynamic vehicle design and development, materials limitations, and propulsion performance; the propulsion control system requires equal concern. Far more important than merely a subset of propulsion performance, the propulsion control system resides at the crossroads of trajectory optimization, engine static performance, and vehicle-engine configuration optimization. To date, solutions at these crossroads are multidisciplinary and generally lag behind the broader performance issues. Just how daunting these demands will be is suggested. A somewhat simplified treatment of the behavioral characteristics of hypersonic aircraft and the issues associated with their air-breathing propulsion control system design are presented.
A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Garg, Devendra P.
1998-01-01
This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
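A minimal version of this tuning loop is easy to sketch: the centers of the output membership functions form the design vector, and a derivative-free optimizer minimizes a closed-loop cost. The plant, rule base, and cost below are illustrative assumptions, not the paper's spacecraft pointing problem.

```python
import numpy as np
from scipy.optimize import minimize

def tri(x, a, b, c):
    """Triangular membership function on the error universe."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_control(e, p):
    # p = [w_neg, w_zero, w_pos]: output singletons for the three rules
    # "error negative / zero / positive" (error universe roughly [-1, 1]).
    mu = [tri(e, -2, -1, 0), tri(e, -1, 0, 1), tri(e, 0, 1, 2)]
    return sum(m * w for m, w in zip(mu, p)) / (sum(mu) + 1e-9)

def closed_loop_cost(p):
    # First-order plant x' = -x + u, unit step setpoint, cost = ISE.
    x, r, dt, cost = 0.0, 1.0, 0.01, 0.0
    for _ in range(500):
        u = fuzzy_control(r - x, p)
        x += dt * (-x + u)
        cost += dt * (r - x) ** 2
    return cost

res = minimize(closed_loop_cost, x0=[-1.0, 0.0, 1.0], method="Nelder-Mead")
print(res.x, res.fun)   # tuned rule outputs and achieved closed-loop cost
```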
Effects of Cognitive Interventions on Sports Anxiety and Performance.
ERIC Educational Resources Information Center
Murphy, Shane M.; Woolfolk, Robert L.
Oxendine (1970) hypothesized that the arousal-performance relationship varies across tasks, such that gross motor activities will require high arousal for optimal performance while fine motor activities will be facilitated by low arousal, but adversely affected by high arousal. Although the effects of preparatory arousal on strength performance…
Gaussian process regression for geometry optimization
NASA Astrophysics Data System (ADS)
Denzel, Alexander; Kästner, Johannes
2018-03-01
We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a two times differentiable form of the Matérn kernel and the squared exponential kernel. The Matérn kernel performs much better. We give a detailed description of the optimization procedures. These include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
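The surrogate-driven loop can be sketched with off-the-shelf tools. The toy below fits a GPR with a twice-differentiable Matérn kernel (nu = 2.5) to energies of a one-dimensional "potential energy surface" and repeatedly steps to the surrogate minimizer; unlike the paper's optimizer it uses energies only, with no gradients or overshooting, so it illustrates the idea rather than the method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def energy(x):                       # toy PES with a single minimum
    return (x - 1.3) ** 2 + 0.1 * np.sin(5 * x)

X = np.array([[-2.0], [0.0], [2.0]])          # initial evaluations
y = energy(X).ravel()
gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(10):
    gpr.fit(X, y)
    grid = np.linspace(-2, 2, 401).reshape(-1, 1)
    x_next = grid[np.argmin(gpr.predict(grid))]   # minimizer of the surrogate
    X = np.vstack([X, [x_next]])                  # evaluate and refit
    y = np.append(y, energy(x_next))

print(X[np.argmin(y)], y.min())      # best "geometry" found and its energy
```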
Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete
2008-08-20
Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
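Fractional factorials like those mentioned are straightforward to construct. The fragment below builds a 2^(5-2) design for five two-level factors from a full 2^3 base using the common generators D = AB and E = AC; the choice of generators is ours for illustration.

```python
import itertools

# Full 2^3 factorial in factors A, B, C; D and E aliased via generators.
base = list(itertools.product([-1, 1], repeat=3))
design = [(a, b, c, a * b, a * c) for a, b, c in base]   # D = AB, E = AC

for run in design:
    print(run)   # 8 runs instead of 32 for the full five-factor factorial
```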
NASA Astrophysics Data System (ADS)
Maser, Adam Charles
More electric aircraft systems, high power avionics, and a reduction in heat sink capacity have placed a larger emphasis on correctly satisfying aircraft thermal management requirements during conceptual design. Thermal management systems must be capable of dealing with these rising heat loads, while simultaneously meeting mission performance. Since all subsystem power and cooling requirements are ultimately traced back to the engine, the growing interactions between the propulsion and thermal management systems are becoming more significant. As a result, it is necessary to consider their integrated performance during the conceptual design of the aircraft gas turbine engine cycle to ensure that thermal requirements are met. This can be accomplished by using thermodynamic subsystem modeling and simulation while conducting the necessary design trades to establish the engine cycle. However, this approach also poses technical challenges associated with the existence of elaborate aircraft subsystem interactions. This research addresses these challenges through the creation of a parsimonious, transparent thermodynamic model of propulsion and thermal management systems performance with a focus on capturing the physics that have the largest impact on propulsion design choices. This modeling environment, known as Cycle Refinement for Aircraft Thermodynamically Optimized Subsystems (CRATOS), is capable of operating in on-design (parametric) and off-design (performance) modes and includes a system-level solver to enforce design constraints. A key aspect of this approach is the incorporation of physics-based formulations involving the concurrent usage of the first and second laws of thermodynamics, which are necessary to achieve a clearer view of the component-level losses across the propulsion and thermal management systems. This is facilitated by the direct prediction of the exergy destruction distribution throughout the system and the resulting quantification of available work losses over the time history of the mission. The characterization of the thermodynamic irreversibility distribution helps give the propulsion systems designer an absolute and consistent view of the tradeoffs associated with the design of the entire integrated system. Consequently, this leads directly to the question of the proper allocation of irreversibility across each of the components. The process of searching for the most favorable allocation of this irreversibility is the central theme of the research and must take into account production cost and vehicle mission performance. The production cost element is accomplished by including an engine component weight and cost prediction capability within the system model. The vehicle mission performance is obtained by directly linking the propulsion and thermal management model to a vehicle performance model and flying it through a mission profile. A canonical propulsion and thermal management systems architecture is then presented to experimentally test each element of the methodology separately: first the integrated modeling and simulation, then the irreversibility, cost, and mission performance considerations, and then finally the proper technique to perform the optimal allocation. A goal of this research is the description of the optimal allocation of system irreversibility to enable an engine cycle design with improved performance and cost at the vehicle-level. 
To do this, a numerical optimization was first used to minimize system-level production and operating costs by fixing the performance requirements and identifying the best settings for all of the design variables. There are two major drawbacks to this approach: It does not allow the designer to directly trade off the performance requirements and it does not allow the individual component losses to directly factor into the optimization. An irreversibility allocation approach based on the economic concept of resource allocation is then compared to the numerical optimization. By posing the problem in economic terms, exergy destruction is treated as a true common currency to barter for improved efficiency, cost, and performance. This allows the designer to clearly see how changes in the irreversibility distribution impact the overall system. The inverse design is first performed through a filtered Monte Carlo to allow the designer to view the irreversibility design space. The designer can then directly perform the allocation using the exergy destruction, which helps to place the design choices on an even thermodynamic footing. Finally, two use cases are presented to show how the irreversibility allocation approach can assist the designer. The first describes a situation where the designer can better address competing system-level requirements; the second describes a different situation where the designer can choose from a number of options to improve a system in a manner that is more robust to future requirements.
Fuzzy probabilistic design of water distribution networks
NASA Astrophysics Data System (ADS)
Fu, Guangtao; Kapelan, Zoran
2011-05-01
The primary aim of this paper is to present a fuzzy probabilistic approach for optimal design and rehabilitation of water distribution systems, combining aleatoric and epistemic uncertainties in a unified framework. The randomness and imprecision in future water consumption are characterized using fuzzy random variables whose realizations are not real but fuzzy numbers, and the nodal head requirements are represented by fuzzy sets, reflecting the imprecision in customers' requirements. The optimal design problem is formulated as a two-objective optimization problem, with minimization of total design cost and maximization of system performance as objectives. The system performance is measured by the fuzzy random reliability, defined as the probability that the fuzzy head requirements are satisfied across all network nodes. The satisfactory degree is represented by necessity measure or belief measure in the sense of the Dempster-Shafer theory of evidence. An efficient algorithm is proposed, within a Monte Carlo procedure, to calculate the fuzzy random system reliability and is effectively combined with the nondominated sorting genetic algorithm II (NSGAII) to derive the Pareto optimal design solutions. The newly proposed methodology is demonstrated with two case studies: the New York tunnels network and Hanoi network. The results from both cases indicate that the new methodology can effectively accommodate and handle various aleatoric and epistemic uncertainty sources arising from the design process and can provide optimal design solutions that are not only cost-effective but also have higher reliability to cope with severe future uncertainties.
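The core of the reliability calculation can be caricatured in a few lines: sample the random demand, compute the resulting nodal head, and average the fuzzy satisfaction degree of the head requirement. The head-loss model, fuzzy membership, and all numbers below are invented stand-ins for the paper's network hydraulics.

```python
import numpy as np

rng = np.random.default_rng(0)

def head_satisfaction(h, h_min=28.0, h_des=30.0):
    """Fuzzy requirement: 0 below h_min, 1 above h_des, linear between."""
    return np.clip((h - h_min) / (h_des - h_min), 0.0, 1.0)

def simulated_head(demand):
    return 35.0 - 0.04 * demand ** 1.85   # crude Hazen-Williams-like loss

demands = rng.normal(100.0, 15.0, size=10_000)   # random future demand
reliability = head_satisfaction(simulated_head(demands)).mean()
print(f"fuzzy random reliability ~ {reliability:.3f}")
```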
A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.
2004-01-01
The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft, and this limitation made the design optimization problematic. The deficiencies have been alleviated through use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code, was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.
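The two approximator styles contrasted above can be reproduced on toy data with standard tools; the sketch below fits a high-order polynomial regression and a small neural network to the same noisy one-dimensional response (our example, not FLOPS output).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = np.linspace(0, 1, 40).reshape(-1, 1)
y = np.sin(6 * X).ravel() + 0.1 * rng.standard_normal(40)

# Polynomial regression tends to chase individual data points; the small
# network tends toward a smoother mean path through them.
poly = make_pipeline(PolynomialFeatures(9), LinearRegression()).fit(X, y)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, y)

x_test = np.array([[0.37]])
print(poly.predict(x_test), net.predict(x_test))
```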
NASA Technical Reports Server (NTRS)
Allan, Brian G.; Owens, Lewis R.; Lin, John C.
2006-01-01
This research will investigate the use of Design-of-Experiments (DOE) in the development of an optimal passive flow control vane design for a boundary-layer-ingesting (BLI) offset inlet in transonic flow. This inlet flow control is designed to minimize the engine fan-face distortion levels and first five Fourier harmonic half amplitudes while maximizing the inlet pressure recovery. Numerical simulations of the BLI inlet are computed using the Reynolds-averaged Navier-Stokes (RANS) flow solver, OVERFLOW, developed at NASA. These simulations are used to generate the numerical experiments for the DOE response surface model. In this investigation, two DOE optimizations were performed using a D-Optimal Response Surface model. The first DOE optimization was performed using four design factors which were vane height and angles-of-attack for two groups of vanes. One group of vanes was placed at the bottom of the inlet and a second group symmetrically on the sides. The DOE design was performed for a BLI inlet with a free-stream Mach number of 0.85 and a Reynolds number of 2 million, based on the length of the fan-face diameter, matching an experimental wind tunnel BLI inlet test. The first DOE optimization required a fifth order model having 173 numerical simulation experiments and was able to reduce the DC60 baseline distortion from 64% down to 4.4%, while holding the pressure recovery constant. A second DOE optimization was performed holding the vane heights at a constant value from the first DOE optimization with the two vane angles-of-attack as design factors. This DOE only required a second order model fit with 15 numerical simulation experiments and reduced DC60 to 3.5% with small decreases in the fourth and fifth harmonic amplitudes. The second optimal vane design was tested at the NASA Langley 0.3-Meter Transonic Cryogenic Tunnel in a BLI inlet experiment. The experimental results showed an 80% reduction of DPCPavg, the circumferential distortion level at the engine fan face.
NASA Technical Reports Server (NTRS)
Burcham, Frank W., Jr.; Gilyard, Glenn B.; Myers, Lawrence P.
1990-01-01
Integration of propulsion and flight control systems and their optimization offers significant performance improvements. Research programs were conducted which have developed new propulsion and flight control integration concepts, implemented designs on high-performance airplanes, demonstrated these designs in flight, and measured the performance improvements. These programs, first on the YF-12 airplane, and later on the F-15, demonstrated increased thrust, reduced fuel consumption, increased engine life, and improved airplane performance; with improvements in the 5 to 10 percent range achieved with integration and with no changes to hardware. The design, software and hardware developments, and testing requirements were shown to be practical.
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Chen, Y. H.
1974-01-01
An indirect synthesis method is used in the efficient optimal design of multi-degree of freedom, multi-design element, nonlinear, transient systems. A limiting performance analysis which requires linear programming for a kinematically linear system is presented. The system is selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The efficiency is a result of the method avoiding the repetitive systems analyses accompanying other numerical optimization methods.
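The limiting performance idea reduces to a linear program once the dynamics are discretized. As a hedged stand-in for the paper's formulation, the sketch below finds the smallest peak force that moves a unit mass from rest to x = 1 with zero final velocity.

```python
import numpy as np
from scipy.optimize import linprog

N, dt = 20, 0.1
# Decision vector: [u_1 ... u_N, s] with |u_k| <= s; minimize the peak s.
c = np.zeros(N + 1)
c[-1] = 1.0

A_ub = np.zeros((2 * N, N + 1))
for k in range(N):
    A_ub[k, k], A_ub[k, -1] = 1.0, -1.0            #  u_k - s <= 0
    A_ub[N + k, k], A_ub[N + k, -1] = -1.0, -1.0   # -u_k - s <= 0
b_ub = np.zeros(2 * N)

# Terminal equalities from zero-order-hold double-integrator dynamics:
# v_N = dt * sum(u_k) = 0,  x_N = dt^2 * sum((N - k - 0.5) * u_k) = 1.
A_eq = np.vstack([np.append(dt * np.ones(N), 0.0),
                  np.append(dt**2 * (N - np.arange(N) - 0.5), 0.0)])
b_eq = np.array([0.0, 1.0])

res = linprog(c, A_ub, b_ub, A_eq, b_eq,
              bounds=[(None, None)] * (N + 1))
print(res.x[-1])   # minimal achievable peak force (limiting performance)
```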
NASA Astrophysics Data System (ADS)
Uhlemann, Sebastian; Wilkinson, Paul B.; Maurer, Hansruedi; Wagner, Florian M.; Johnson, Timothy C.; Chambers, Jonathan E.
2018-07-01
Within geoelectrical imaging, the choice of measurement configurations and electrode locations is known to control the image resolution. Previous work has shown that optimized survey designs can provide a model resolution that is superior to standard survey designs. This paper demonstrates a methodology to optimize resolution within a target area, while limiting the number of required electrodes, thereby selecting optimal electrode locations. This is achieved by extending previous work on the `Compare-R' algorithm, which optimizes the model resolution in a target area by calculating updates to the resolution matrix. Here, an additional weighting factor is introduced that allows measurement configurations that can be acquired on a given set of electrodes to be added preferentially. The performance of the optimization is tested on two synthetic examples and verified with a laboratory study. The effect of the weighting factor is investigated using an acquisition layout comprising a single line of electrodes. The results show that an increasing weight decreases the area of improved resolution but leads to a smaller number of electrode positions. Imaging results superior to a standard survey design were achieved using 56 per cent fewer electrodes. The performance was also tested on a 3-D acquisition grid, where superior resolution within a target at the base of an embankment was achieved using 22 per cent fewer electrodes than a comparable standard survey. The effect of the underlying resistivity distribution on the performance of the optimization was investigated and it was shown that even strong resistivity contrasts have only minor impact. The synthetic results were verified in a laboratory tank experiment, where notable image improvements were achieved. This work shows that optimized surveys can be designed that have a resolution superior to standard survey designs, while requiring significantly fewer electrodes. This methodology thereby provides a means for improving the efficiency of geoelectrical imaging.
Preliminary Sizing Study of Ares-I and Ares-V Liquid Hydrogen Tanks
NASA Technical Reports Server (NTRS)
Oliver, Stanley T.; Harper, David W.
2012-01-01
A preliminary sizing study of two cryogenic propellant tanks was performed using a FORTRAN optimization program to determine weight-efficient orthogrid designs for the tank barrel sections only. Various tensile and compressive failure modes were considered, including general buckling of cylinders with a shell buckling knockdown factor. Eight independent combinations of three design requirements were also considered, along with their effects on tank weight. The approach was to investigate each design case with a variable shell buckling knockdown factor, determining the most weight-efficient combination of orthogrid design parameters. Numerous optimization analyses were performed, and the results presented herein compare the effects of the different design requirements and shell buckling knockdown factors. Through a series of comparisons between design requirements or shell buckling knockdown factors, the relative change in overall tank barrel weight is shown. The findings indicate that the design requirements can substantially increase the tank weight, while a less conservative shell buckling knockdown factor can modestly reduce it.
Nutritional needs in the professional practice of swimming: a review
Domínguez, Raúl; Jesús-Sánchez-Oliver, Antonio; Cuenca, Eduardo; Jodra, Pablo; Fernandes da Silva, Sandro; Mata-Ordóñez, Fernando
2017-01-01
[Purpose] Swimming requires developing a high aerobic and anaerobic capacity for strength and technical efficiency. The purpose of this study was to establish the nutritional requirements and dietary strategies that can optimize swimming performance. [Methods] Several related studies retrieved from the databases, Dialnet, Elsevier, Medline, Pubmed, and Web of Science, through keyword search strategies were reviewed. [Results] The recommended carbohydrate intake ranges between 6-10-12 g/kg/d, protein 2 g/kg/d, and fat should surpass 20-25% of the daily intake. [Conclusion] Performance can be optimized with a hydration plan, as well as adequate periodization of supplements, such as caffeine, creatine, sodium bicarbonate, β-alanine, beetroot juice, vitamin D, bovine colostrum, and HMB. PMID:29370667
Automatically updating predictive modeling workflows support decision-making in drug design.
Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O
2016-09-01
Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
Processor design optimization methodology for synthetic vision systems
NASA Astrophysics Data System (ADS)
Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.
1997-06-01
Architecture optimization requires numerous inputs from hardware to software specifications. The task of varying these input parameters to obtain an optimal system architecture with regard to cost, specified performance and method of upgrade considerably increases the development cost due to the infinitude of events, most of which cannot even be defined by any simple enumeration or set of inequalities. We shall address the use of a PC-based tool using genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically passive millimeter wave system implementation.
A design optimization process for Space Station Freedom
NASA Technical Reports Server (NTRS)
Chamberlain, Robert G.; Fox, George; Duquette, William H.
1990-01-01
The Space Station Freedom Program is used to develop and implement a process for design optimization. Because the relative worth of arbitrary design concepts cannot be assessed directly, comparisons must be based on designs that provide the same performance from the point of view of station users; such designs can be compared in terms of life cycle cost. Since the technology required to produce a space station is widely dispersed, a decentralized optimization process is essential. A formulation of the optimization process is provided and the mathematical models designed to facilitate its implementation are described.
Science requirements and optimization of the silicon pore optics design for the Athena mirror
NASA Astrophysics Data System (ADS)
Willingale, R.; Pareschi, G.; Christensen, F.; den Herder, J.-W.; Ferreira, D.; Jakobsen, A.; Ackermann, M.; Collon, M.; Bavdaz, M.
2014-07-01
The science requirements for the Athena X-ray mirror are to provide a collecting area of 2 m2 at 1 keV, an angular resolution of ~5 arc seconds half energy width (HEW), and a field of view of 40-50 arc minutes in diameter. This combination of area and angular resolution over a wide field is possible because of unique features of the silicon pore optics (SPO) technology used. Here we describe the optimization and modifications of the SPO technology required to achieve the Athena mirror specification and demonstrate how the optical design of the mirror system impacts the scientific performance of Athena.
Capsule performance optimization in the National Ignition Campaign
NASA Astrophysics Data System (ADS)
Landen, O. L.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Glenzer, S. H.; Hamza, A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kirkwood, R. K.; Kyrala, G. A.; Michel, P.; Milovich, J.; Munro, D. H.; Nikroo, A.; Olson, R. E.; Robey, H. F.; Spears, B. K.; Thomas, C. A.; Weber, S. V.; Wilson, D. C.; Marinak, M. M.; Suter, L. J.; Hammel, B. A.; Meyerhofer, D. D.; Atherton, J.; Edwards, J.; Haan, S. W.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.
2010-05-01
A capsule performance optimization campaign will be conducted at the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition by laser-driven hohlraums [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)]. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the OMEGA facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
NASA Astrophysics Data System (ADS)
Armstrong, Michael James
Increases in power demands and changes in the design practices of overall equipment manufacturers have led to a new paradigm in vehicle systems definition. The development of unique power systems architectures is of increasing importance to overall platform feasibility and must be pursued early in the aircraft design process. Many vehicle systems architecture trades must be conducted concurrent to platform definition. With the increased complexity introduced during conceptual design, accurate predictions of unit-level sizing requirements must be made. Architecture-specific emergent requirements must be identified which arise due to the complex integrated effect of unit behaviors. Off-nominal operating scenarios present sizing-critical requirements to the aircraft vehicle systems. These requirements are architecture specific and emergent. Standard heuristically defined failure mitigation is sufficient for sizing traditional and evolutionary architectures. However, architecture concepts which vary significantly in structure and composition require that unique failure mitigation strategies be defined for accurate estimation of unit-level requirements. Identifying these off-nominal emergent operational requirements requires extensions to traditional safety and reliability tools and the systematic identification of optimal performance degradation strategies. Discrete operational constraints posed by traditional Functional Hazard Assessment (FHA) are replaced by continuous relationships between function loss and operational hazard. These relationships pose the objective function for hazard minimization. Load shedding optimization is performed for all statistically significant failures by varying the allocation of functional capability throughout the vehicle systems architecture. Expressing hazards, and thereby reliability requirements, as continuous relationships with the magnitude and duration of functional failure requires augmentations to the traditional means for system safety assessment (SSA). The traditional two-state, discrete system reliability assessment proves insufficient. Reliability is therefore handled in an analog fashion: as a function of magnitude of failure and failure duration. A series of metrics are introduced which characterize system performance in terms of analog hazard probabilities. These include analog and cumulative system and functional risk, hazard correlation, and extensions to the traditional component importance metrics. Continuous FHA, load shedding optimization, and analog SSA constitute the SONOMA process (Systematic Off-Nominal Requirements Analysis). Analog system safety metrics inform both architecture optimization (changes in unit-level capability and reliability) and architecture augmentation (changes in architecture structure and composition). This process was applied to two vehicle systems concepts (conventional and 'more-electric') in terms of loss/hazard relationships with varying degrees of fidelity. Application of this process shows that the traditional assumptions regarding the structure of the function loss vs. hazard relationship apply undue design bias to functions and components during exploratory design. This bias is illustrated in terms of inaccurate estimations of the system- and function-level risk and unit-level importance. It was also shown that off-nominal emergent requirements must be defined specific to each architecture concept.
Quantitative comparisons of architecture-specific off-nominal performance were obtained, which provide evidence of the need for accurate definition of load shedding strategies during architecture exploratory design. Formally expressing performance degradation strategies in terms of the minimization of a continuous hazard space enhances the system architect's ability to accurately predict sizing-critical emergent requirements concurrent to architecture definition. Furthermore, the methods and frameworks generated here provide a structured and flexible means for eliciting these architecture-specific requirements during the performance of architecture trades.
Balancing on tightropes and slacklines
Paoletti, P.; Mahadevan, L.
2012-01-01
Balancing on a tightrope or a slackline is an example of a neuromechanical task where the whole body both drives and responds to the dynamics of the external environment, often on multiple timescales. Motivated by a range of neurophysiological observations, here we formulate a minimal model for this system and use optimal control theory to design a strategy for maintaining an upright position. Our analysis of the open and closed-loop dynamics shows the existence of an optimal rope sag where balancing requires minimal effort, consistent with qualitative observations and suggestive of strategies for optimizing balancing performance while standing and walking. Our consideration of the effects of nonlinearities, potential parameter coupling and delays on the overall performance shows that although these factors change the results quantitatively, the existence of an optimal strategy persists. PMID:22513724
Computational methods for aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Peeters, M. F.
1983-01-01
Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.
NASA Technical Reports Server (NTRS)
Adams, J. R.; Hawley, S. W.; Peterson, G. R.; Salinger, S. S.; Workman, R. A.
1971-01-01
A hardware and software specification covering requirements for the computer enhancement of structural weld radiographs was considered. Three scanning systems were used to digitize more than 15 weld radiographs. The performance of these systems was evaluated by determining modulation transfer functions and noise characteristics. Enhancement techniques were developed and applied to the digitized radiographs. The scanning parameters of spot size and spacing and film density were studied to optimize the information content of the digital representation of the image.
An approach for configuring space photovoltaic tandem arrays based on cell layer performance
NASA Technical Reports Server (NTRS)
Flora, C. S.; Dillard, P. A.
1991-01-01
Meeting solar array performance goals of 300 W/kg requires use of solar cells with orbital efficiencies greater than 20 percent. Only multijunction cells and cell layers operating in tandem produce this required efficiency. An approach for defining solar array design concepts that use tandem cell layers involves the following: transforming cell layer performance at standard test conditions to on-orbit performance; optimizing circuit configuration with tandem cell layers; evaluating circuit sensitivity to cell current mismatch; developing the array electrical design around the selected circuit; and predicting array orbital performance, including seasonal variations.
Optimizing Requirements Decisions with KEYS
NASA Technical Reports Server (NTRS)
Jalali, Omid; Menzies, Tim; Feather, Martin
2008-01-01
Recent work with NASA's Jet Propulsion Laboratory has allowed for external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well-defined goal: select mitigations that retire risks, which in turn increases the number of attainable requirements. Such a non-linear optimization, in which the fewest mitigations must be selected while achieving the most requirements, is a well-studied problem; however, identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, offers a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude. Prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate; KEYS runs much faster than that, e.g., for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.
Formulation for Simultaneous Aerodynamic Analysis and Design Optimization
NASA Technical Reports Server (NTRS)
Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.
1993-01-01
An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
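In generic terms, the adjoint-variable device the abstract relies on can be written compactly; the notation below is the standard textbook form with design variables β, flow state Q, and flow residual R, not notation taken from the paper.

```latex
% Minimize f(Q(\beta), \beta) subject to the flow constraint R(Q(\beta), \beta) = 0.
\left(\frac{\partial R}{\partial Q}\right)^{\!T}\!\lambda
   = -\left(\frac{\partial f}{\partial Q}\right)^{\!T}
\qquad\Longrightarrow\qquad
\frac{df}{d\beta}
   = \frac{\partial f}{\partial \beta}
   + \lambda^{T}\,\frac{\partial R}{\partial \beta}
```

One adjoint solve thus yields the full design gradient regardless of the number of design variables, which is why the method nullifies the effect of the enlarged design vector.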
Evolutionary Bi-objective Optimization for Bulldozer and Its Blade in Soil Cutting
NASA Astrophysics Data System (ADS)
Sharma, Deepak; Barakat, Nada
2018-02-01
An evolutionary optimization approach is adopted in this paper for simultaneously achieving economic and productive soil cutting. The economic aspect is addressed by minimizing the power required from the bulldozer, and the cutting is made productive by minimizing the time of soil cutting. For determining the power requirement, two force models are adopted from the literature to quantify the cutting force on the blade. Three domain-specific constraints are also proposed: limiting the power from the bulldozer, limiting the maximum force on the bulldozer blade, and achieving the desired production rate. The bi-objective optimization problem is solved using five benchmark multi-objective evolutionary algorithms and one classical optimization technique, the ɛ-constraint method. The Pareto-optimal solutions, including the knee region, are obtained. Further, a post-optimal analysis is performed on the obtained solutions to decipher relationships among the objectives and decision variables. Such relationships are later used to formulate guidelines for selecting the optimal set of input parameters. The obtained results are compared with experimental results from the literature and show close agreement.
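The classical ɛ-constraint baseline is easy to sketch: minimize one objective while capping the other, and sweep the cap to trace the Pareto front. The two response models below are invented stand-ins for the paper's force models, with x = [cutting depth, blade speed].

```python
import numpy as np
from scipy.optimize import minimize

def power(x):        # toy model: power grows with depth and speed
    return 40.0 * x[0] * x[1] + 5.0 * x[1] ** 2

def cut_time(x):     # toy model: cutting time falls as depth and speed rise
    return 10.0 / (x[0] * x[1])

pareto = []
for eps in np.linspace(30.0, 120.0, 7):        # sweep the power cap
    res = minimize(cut_time, x0=[0.5, 1.0],
                   bounds=[(0.1, 1.0), (0.5, 3.0)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - power(x)}])
    pareto.append((power(res.x), cut_time(res.x)))

for p, t in pareto:
    print(f"power {p:6.1f}  time {t:5.2f}")     # one Pareto point per cap
```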
Topology-optimized metasurfaces: impact of initial geometric layout.
Yang, Jianji; Fan, Jonathan A
2017-08-15
Topology optimization is a powerful iterative inverse design technique in metasurface engineering and can transform an initial layout into a high-performance device. With this method, devices are optimized within a local design phase space, making the identification of suitable initial geometries essential. In this Letter, we examine the impact of initial geometric layout on the performance of large-angle (75 deg) topology-optimized metagrating deflectors. We find that when conventional metasurface designs based on dielectric nanoposts are used as initial layouts for topology optimization, the final devices have efficiencies around 65%. In contrast, when random initial layouts are used, the final devices have ultra-high efficiencies that can reach 94%. Our numerical experiments suggest that device topologies based on conventional metasurface designs may not be suitable to produce ultra-high-efficiency, large-angle metasurfaces. Rather, initial geometric layouts with non-trivial topologies and shapes are required.
Simulation Research on Vehicle Active Suspension Controller Based on G1 Method
NASA Astrophysics Data System (ADS)
Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui
2017-09-01
Based on the order relation analysis method (G1 method), an optimal linear controller for a vehicle active suspension is designed. First, the active and passive suspension systems of a single-wheel vehicle model are modeled and the system input signal model is determined. Secondly, the state-space equation of the system motion is established from the dynamics, and the optimal linear controller design is completed using optimal control theory. The weighting coefficients of the performance indices of the suspension are determined by the G1 method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the G1 method under the given road conditions, the vehicle acceleration, suspension stroke, and tire motion displacement are optimized, improving the comprehensive performance of the vehicle while keeping the active control within the requirements.
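The G1 weighting step itself is a short closed-form calculation: experts rank the performance indices and assign importance ratios between neighbors in the ranking, from which the weights follow by back-substitution. The ratios below are invented for illustration.

```python
def g1_weights(ratios):
    """G1 weights from ratios[k] = w_k / w_{k+1}, indices sorted most to
    least important; weights sum to 1 by construction."""
    # Least important weight: w_last = 1 / (1 + sum of cumulative products).
    prods = []
    for i in range(len(ratios)):
        p = 1.0
        for r in ratios[i:]:
            p *= r
        prods.append(p)
    w_last = 1.0 / (1.0 + sum(prods))
    weights = [w_last]
    for r in reversed(ratios):          # back-substitute w_k = r_k * w_{k+1}
        weights.append(weights[-1] * r)
    return list(reversed(weights))      # most important index first

# Three indices, e.g. body acceleration > suspension stroke > tire motion:
print(g1_weights([1.4, 1.2]))           # weights sum to 1.0
```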
Optimization of dynamic soaring maneuvers to enhance endurance of a versatile UAV
NASA Astrophysics Data System (ADS)
Mir, Imran; Maqsood, Adnan; Akhtar, Suhail
2017-06-01
Dynamic soaring is a process of acquiring energy from atmospheric wind shears and is commonly exhibited by soaring birds to perform long-distance flights. This paper aims to demonstrate a viable algorithm which can be implemented in a near real-time environment to formulate optimal trajectories for dynamic soaring maneuvers of a small-scale Unmanned Aerial Vehicle (UAV). The objective is to harness maximum energy from atmospheric wind shear to improve loiter time for Intelligence, Surveillance and Reconnaissance (ISR) missions. Three-dimensional point-mass UAV equations of motion and a linear wind-gradient profile are used to model the flight dynamics. Utilizing UAV states, controls, operational constraints, and initial and terminal conditions that enforce a periodic flight, the dynamic soaring problem is formulated as an optimal control problem. Optimized trajectories of the maneuver are subsequently generated for different UAV performance parameters employing pseudospectral techniques. The discussion also encompasses the requirement of generating optimal dynamic soaring trajectories in a real-time environment and the ability of the proposed algorithm to produce solutions quickly. Because dynamic soaring hinges on immediately utilizing the energy available in the wind shear encountered, the proposed algorithm promises viability for practical onboard implementations requiring computation of trajectories in near real time.
Sensitivity of Space Station alpha joint robust controller to structural modal parameter variations
NASA Technical Reports Server (NTRS)
Kumar, Renjith R.; Cooper, Paul A.; Lim, Tae W.
1991-01-01
The photovoltaic array sun tracking control system of Space Station Freedom is described. A synthesis procedure for determining optimized values of the design variables of the control system is developed using a constrained optimization technique. The synthesis is performed to provide a given level of stability margin, to achieve the most responsive tracking performance, and to meet other design requirements. Performance of the baseline design, which is synthesized using predicted structural characteristics, is discussed and the sensitivity of the stability margin is examined for variations of the frequencies, mode shapes and damping ratios of dominant structural modes. The design provides enough robustness to tolerate a sizeable error in the predicted modal parameters. A study was made of the sensitivity of performance indicators as the modal parameters of the dominant modes vary. The design variables are resynthesized for varying modal parameters in order to achieve the most responsive tracking performance while satisfying the design requirements. This procedure of reoptimizing design parameters would be useful in improving the control system performance if accurate model data are provided.
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
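The DVFS trade-off at the heart of the scheduler can be illustrated in a few lines: lowering the voltage/frequency level cuts dynamic power (roughly P ~ C V^2 f) but stretches task runtime, so energy and makespan pull against each other. The operating points and constants below are invented for illustration.

```python
# Toy DVFS model: three (voltage, frequency) operating points per processor.
LEVELS = [(1.2, 1.0), (1.0, 0.8), (0.8, 0.6)]   # (volts, GHz), invented
CAPACITANCE = 1.0                                # lumped switching constant

def energy_and_time(cycles, level):
    v, f = LEVELS[level]
    time = cycles / f                    # runtime stretches as f drops
    power = CAPACITANCE * v * v * f      # dynamic power ~ C * V^2 * f
    return power * time, time

for lvl in range(len(LEVELS)):
    e, t = energy_and_time(cycles=2.0, level=lvl)
    print(f"level {lvl}: energy {e:.2f}, time {t:.2f}")
```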
Robust Control Design for Systems With Probabilistic Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.
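A toy version of the reliability-based formulation: treat an uncertain plant parameter as a random variable, estimate the probability of violating a requirement by Monte Carlo, and let an optimizer trade that probability against control effort. The scalar plant, specification, and weights below are our assumptions, not the paper's examples.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
a = rng.normal(-1.0, 0.3, size=20_000)   # uncertain open-loop pole of x' = a x + u

def cost(k):
    pole = a - k                          # closed-loop pole under u = -k x
    p_violate = np.mean(pole > -2.0)      # requirement: pole faster than -2
    return p_violate + 0.02 * k**2        # reliability vs. control effort

res = minimize_scalar(cost, bounds=(0.0, 5.0), method="bounded")
print(f"gain {res.x:.2f}, "
      f"violation probability {np.mean(a - res.x > -2.0):.3f}")
```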
NASA Astrophysics Data System (ADS)
Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.
2016-09-01
Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of the upper limit of a system's operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy, a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.
SPOKES: An end-to-end simulation facility for spectroscopic cosmological surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nord, B.; Amara, A.; Refregier, A.; ...
2016-03-03
The nature of dark matter, dark energy and large-scale gravity pose some of the most pressing questions in cosmology today. These fundamental questions require highly precise measurements, and a number of wide-field spectroscopic survey instruments are being designed to meet this requirement. A key component in these experiments is the development of a simulation tool to forecast science performance, define requirement flow-downs, optimize implementation, demonstrate feasibility, and prepare for exploitation. We present SPOKES (SPectrOscopic KEn Simulation), an end-to-end simulation facility for spectroscopic cosmological surveys designed to address this challenge. SPOKES is based on an integrated infrastructure, modular function organization, coherent data handling and fast data access. These key features allow reproducibility of pipeline runs, enable ease of use and provide flexibility to update functions within the pipeline. The cyclic nature of the pipeline offers the possibility to make the science output an efficient measure for design optimization and feasibility testing. We present the architecture, first science, and computational performance results of the simulation pipeline. The framework is general, but for the benchmark tests, we use the Dark Energy Spectrometer (DESpec), one of the early concepts for the upcoming project, the Dark Energy Spectroscopic Instrument (DESI). As a result, we discuss how the SPOKES framework enables a rigorous process to optimize and exploit spectroscopic survey experiments in order to derive high-precision cosmological measurements optimally.
Impact of Aerodynamics and Structures Technology on Heavy Lift Tiltrotors
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
2006-01-01
Rotor performance and aeroelastic stability are presented for a 124,000-lb Large Civil Tilt Rotor (LCTR) design. It was designed to carry 120 passengers for 1200 nm, with performance of 350 knots at 30,000 ft altitude. Design features include a low-mounted wing and hingeless rotors, with a very low cruise tip speed of 350 ft/sec. The rotor and wing design processes are described, including rotor optimization methods and wing/rotor aeroelastic stability analyses. New rotor airfoils were designed specifically for the LCTR; the resulting performance improvements are compared to current technology airfoils. Twist, taper and precone optimization are presented, along with the effects of blade flexibility on performance. A new wing airfoil was designed and a composite structure was developed to meet the wing load requirements for certification. Predictions of aeroelastic stability are presented for the optimized rotor and wing, along with summaries of the effects of rotor design parameters on stability.
Su, Weixing; Chen, Hanning; Liu, Fang; Lin, Na; Jing, Shikai; Liang, Xiaodan; Liu, Wei
2017-03-01
There are many dynamic optimization problems in the real world whose convergence behavior and searching ability must be considered carefully, in clear contrast to static optimization cases. Such problems require an optimization algorithm to adaptively track the changing optima over dynamic environments, instead of only finding the global optimal solution in a static environment. This paper proposes a novel comprehensive learning artificial bee colony optimizer (CLABC) for optimization in dynamic environments, which employs a pool of optimal foraging strategies to balance the exploration and exploitation tradeoff. The main motive of CLABC is to enrich artificial bee foraging behaviors in the ABC model by combining Powell's pattern search method, a life-cycle model, and a crossover-based social learning strategy. The proposed CLABC is a more realistic bee-colony model in which bees can reproduce and die dynamically throughout the foraging process, so the population size varies as the algorithm runs. The experiments for evaluating CLABC are conducted on the dynamic moving peak benchmarks. Furthermore, the proposed algorithm is applied to a real-world application of dynamic RFID network optimization. Statistical analysis of all these cases highlights the significant performance improvement due to the beneficial combination and demonstrates the performance superiority of the proposed algorithm.
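CLABC combines several components (Powell's pattern search, a life-cycle model, crossover learning) that are not reproduced here. The sketch below shows only the basic employed/onlooker/scout structure of an artificial bee colony search that such variants build on, applied to a toy static objective; the scout phase is what keeps diversity in the dynamic settings the paper targets.

```python
import random

def sphere(x):
    """Toy objective; the paper uses moving-peak benchmarks instead."""
    return sum(v * v for v in x)

def abc_minimize(f, dim=5, n_food=10, limit=20, iters=200, lo=-5.0, hi=5.0):
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food

    def neighbor(i):
        """Perturb one coordinate toward/away from a random other source."""
        k = random.randrange(n_food)
        d = random.randrange(dim)
        cand = foods[i][:]
        cand[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        return cand

    for _ in range(iters):
        # Employed bees: local search around each food source.
        for i in range(n_food):
            cand = neighbor(i)
            if f(cand) < f(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Onlooker bees: bias further search toward the better sources.
        ranked = sorted(range(n_food), key=lambda i: f(foods[i]))
        for i in ranked[: n_food // 2]:
            cand = neighbor(i)
            if f(cand) < f(foods[i]):
                foods[i], trials[i] = cand, 0
        # Scout bees: abandon exhausted sources to preserve diversity.
        for i in range(n_food):
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(foods, key=f)

print(sphere(abc_minimize(sphere)))
```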
Performance optimization for rotors in hover and axial flight
NASA Technical Reports Server (NTRS)
Quackenbush, T. R.; Wachspress, D. A.; Kaufman, A. E.; Bliss, D. B.
1989-01-01
Performance optimization for rotors in hover and axial flight is a topic of continuing importance to rotorcraft designers. The aim of this Phase 1 effort has been to demonstrate that a linear optimization algorithm could be coupled to an existing influence coefficient hover performance code. This code, dubbed EHPIC (Evaluation of Hover Performance using Influence Coefficients), uses a quasi-linear wake relaxation to solve for the rotor performance. The coupling was accomplished by expanding the matrix of linearized influence coefficients in EHPIC to accommodate design variables and by deriving new coefficients for linearized equations governing perturbations in power and thrust. These coefficients formed the input to a linear optimization analysis, which used the flow tangency conditions on the blade and in the wake to impose equality constraints on the expanded system of equations; user-specified inequality constraints were also employed to bound the changes in the design. It was found that this locally linearized analysis could be invoked to predict a design change that would produce a reduction in the power required by the rotor at constant thrust. Thus, an efficient search for improved versions of the baseline design can be carried out while retaining the accuracy inherent in a free wake/lifting surface performance analysis.
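EHPIC's influence-coefficient matrices are not available here, but the locally linearized design step described above can be posed as a small linear program: minimize the first-order power change subject to zero first-order thrust change, with box bounds playing the role of the user-specified inequality constraints. The sensitivities below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linearized sensitivities for 3 design variables
# (e.g., twist, taper, chord perturbations), not actual EHPIC output.
dP = np.array([0.8, -0.3, 0.5])    # d(power)/d(design)
dT = np.array([0.2, 0.6, -0.1])    # d(thrust)/d(design)

# Minimize first-order power change subject to zero thrust change
# and bounded design perturbations.
res = linprog(
    c=dP,
    A_eq=dT.reshape(1, -1), b_eq=[0.0],
    bounds=[(-0.05, 0.05)] * 3,
)
print(res.x, dP @ res.x)   # design step and predicted power reduction
```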
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing the environmental impact through the replacement of multiple different products with single adaptable ones. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values for the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
Adapted all-numerical correlator for face recognition applications
NASA Astrophysics Data System (ADS)
Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.-E.
2013-03-01
In this study, we suggest and validate an all-numerical implementation of a VanderLugt correlator which is optimized for face recognition applications. The main goal of this implementation is to take advantage of the benefits (detection, localization, and identification of a target object within a scene) of correlation methods and to exploit the reconfigurability of numerical approaches. This technique requires a numerical implementation of the optical Fourier transform, and we pay special attention to adapting the correlation filter to this numerical implementation. One main goal of this work is to reduce the size of the filter in order to decrease the memory space required for real-time applications. To fulfil this requirement, we code the reference images with 8 bits and study the effect of this coding on the performance of several composite filters (phase-only filter, binary phase-only filter). The resulting saturation effect degrades the correlator's decision performance when filters contain up to nine references. We therefore propose an optimization based on a segmented composite filter. Using this approach, we present tests with different faces demonstrating that the above-mentioned saturation effect is significantly reduced while minimizing the size of the learning database.
The Inverse Optimal Control Problem for a Three-Loop Missile Autopilot
NASA Astrophysics Data System (ADS)
Hwang, Donghyeok; Tahk, Min-Jea
2018-04-01
The performance characteristics of the autopilot must provide a fast response to intercept a maneuvering target and reasonable robustness for system stability under the effects of un-modeled dynamics and noise. In the conventional approach, the three-loop autopilot design is handled through the time constant, damping factor, and open-loop crossover frequency to achieve the desired performance requirements. Note that general optimal control theory can also be used to obtain the same gains as the conventional approach. The key idea of using optimal control techniques for feedback gain design revolves around appropriate selection and interpretation of the performance index for which the control is optimal. This paper derives an explicit expression which relates the weight parameters appearing in the quadratic performance index to design parameters such as open-loop crossover frequency, phase margin, damping factor, or time constant. Since not every selection of design parameters guarantees the existence of an optimal control law, explicit inequalities, named the optimality criteria for the three-loop autopilot (OC3L), are derived to identify all sets of design parameters for which the control law is optimal. Finally, based on OC3L, an efficient gain selection procedure is developed, where the time constant is set as the design objective and the open-loop crossover frequency and phase margin as design constraints. The effectiveness of the proposed technique is illustrated through numerical simulations.
Optimization of a chemical identification algorithm
NASA Astrophysics Data System (ADS)
Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren
2010-04-01
A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
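Both figures of merit named above have standard closed forms; minimal implementations follow. The larger-the-better form of the Taguchi signal-to-noise ratio is an assumption, since the abstract does not state which variant was used.

```python
import math

def matthews_cc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def taguchi_snr_larger_is_better(values):
    """Taguchi signal-to-noise ratio, larger-the-better form (dB)."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / n)

# Example trade-off figures for a 2-category detection problem.
print(matthews_cc(tp=80, tn=90, fp=10, fn=20))
print(taguchi_snr_larger_is_better([0.80, 0.85, 0.78]))
```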
DOT National Transportation Integrated Search
2006-01-01
The implementation of effective performance-based construction quality management requires a tool for determining the impacts of construction quality on the life-cycle performance of pavements. This report presents an update on the efforts in the deve...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, S. B., E-mail: sbroy@rrcat.gov.in; Myneni, G. R., E-mail: rao@jlab.org
2015-12-04
We address the issue of qualifications of the niobium materials to be used for superconducting radio frequency (SCRF) cavity fabrications, from the point of view of a condensed matter physicist/materials scientist. We focus on the particular materials properties of niobium required for the functioning of a SCRF cavity, and how to optimize those properties for the best SCRF cavity performance in a reproducible manner. In this way the niobium materials will not necessarily be characterized by their purity alone, but in terms of those materials properties which will define the limit of the SCRF cavity performance, and also other related material properties which will help to sustain this best SCRF cavity performance. Furthermore, we point out the need for standardization of the post-fabrication processing of niobium SCRF cavities that does not impair the optimized superconducting and thermal properties of the starting niobium materials required for the reproducible performance of the SCRF cavities according to the design values.
Choosing Sensor Configuration for a Flexible Structure Using Full Control Synthesis
NASA Technical Reports Server (NTRS)
Lind, Rick; Nalbantoglu, Volkan; Balas, Gary
1997-01-01
Optimal locations and types for feedback sensors which meet design constraints and control requirements are difficult to determine. This paper introduces an approach to choosing a sensor configuration based on Full Control synthesis. A globally optimal Full Control compensator is computed for each member of a set of sensor configurations which are feasible for the plant. The sensor configuration associated with the Full Control system achieving the best closed-loop performance is chosen for feedback measurements to an output feedback controller. A flexible structure is used as an example to demonstrate this procedure. Experimental results show sensor configurations chosen to optimize the Full Control performance are effective for output feedback controllers.
Optimization of wave rotors for use as gas turbine engine topping cycles
NASA Technical Reports Server (NTRS)
Wilson, Jack; Paxson, Daniel E.
1995-01-01
Use of a wave rotor as a topping cycle for a gas turbine engine can improve specific power and reduce specific fuel consumption. Maximum improvement requires the wave rotor to be optimized for best performance at the mass flow of the engine. The optimization is a trade-off between losses due to friction and passage opening time, and rotational effects. An experimentally validated, one-dimensional CFD code, which includes these effects, has been used to calculate wave rotor performance, and find the optimum configuration. The technique is described, and results given for wave rotors sized for engines with sea level mass flows of 4, 26, and 400 lb/sec.
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina P.; Costa, Lino
2012-09-01
In this paper, a study based on sensitivity analysis is performed for a gait multi-objective optimization system that combines bio-inspired Central Pattern Generators (CPGs) and a multi-objective evolutionary algorithm based on NSGA-II. In this system, CPGs are modeled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. In order to optimize the walking gait, a multi-objective problem with three conflicting objectives is formulated: maximization of the velocity, the wide stability margin, and the behavioral diversity. The experimental results highlight the effectiveness of this multi-objective approach and the importance of the objectives in finding different walking gait solutions for the quadruped robot.
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
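As a concrete illustration of the polynomial branch of this methodology, the sketch below fits a quadratic response surface to sampled data by least squares and then searches the cheap surrogate on a dense grid. The data and design variables are synthetic stand-ins for CFD or experimental samples.

```python
import numpy as np

# Toy data standing in for expensive analysis samples: two design
# variables and a noisy response (names are illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = 1.0 - X[:, 0] ** 2 - 2.0 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(30)

def quad_features(X):
    """Full quadratic basis: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# Least-squares fit of the polynomial response surface.
coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# The surrogate is cheap to evaluate, so a dense grid search (or any
# global optimizer) can replace repeated calls to the expensive analysis.
grid = np.mgrid[-1:1:51j, -1:1:51j].reshape(2, -1).T
pred = quad_features(grid) @ coef
print("surrogate optimum near:", grid[np.argmax(pred)])  # close to (0, 0)
```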
Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions
NASA Technical Reports Server (NTRS)
Gilland, James H.
1991-01-01
The detailed mission and system optimization of low-thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system designs without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfer and Mars missions have been derived. Analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.
Advanced Solar Cells for Satellite Power Systems
NASA Technical Reports Server (NTRS)
Flood, Dennis J.; Weinberg, Irving
1994-01-01
The multiple natures of today's space missions with regard to operational lifetime, orbital environment, cost and size of spacecraft, to name just a few, present such a broad range of performance requirements to be met by the solar array that no single design can suffice to meet them all. The result is a demand for development of specialized solar cell types that help to optimize overall satellite performance within a specified cost range for any given space mission. Historically, space solar array performance has been optimized for a given mission by tailoring the features of silicon solar cells to account for the orbital environment and average operating conditions expected during the mission. It has become necessary to turn to entirely new photovoltaic materials and device designs to meet the requirements of future missions, both in the near and far term. This paper will outline some of the mission drivers and resulting performance requirements that must be met by advanced solar cells, and provide an overview of some of the advanced cell technologies under development to meet them. The discussion will include high efficiency, radiation hard single junction cells; monolithic and mechanically stacked multiple bandgap cells; and thin film cells.
Decision Making in Concurrent Multitasking: Do People Adapt to Task Interference?
Nijboer, Menno; Taatgen, Niels A.; Brands, Annelies; Borst, Jelmer P.; van Rijn, Hedderik
2013-01-01
While multitasking has received a great deal of attention from researchers, we still know little about how well people adapt their behavior to multitasking demands. In three experiments, participants were presented with a multicolumn subtraction task, which required working memory in half of the trials. This primary task had to be combined with a secondary task requiring either working memory or visual attention, resulting in different types of interference. Before each trial, participants were asked to choose which secondary task they wanted to perform concurrently with the primary task. We predicted that if people seek to maximize performance or minimize effort required to perform the dual task, they choose task combinations that minimize interference. While performance data showed that the predicted optimal task combinations indeed resulted in minimal interference between tasks, the preferential choice data showed that a third of participants did not show any adaptation, and for the remainder it took a considerable number of trials before the optimal task combinations were chosen consistently. On the basis of these results we argue that, while in principle people are able to adapt their behavior according to multitasking demands, selection of the most efficient combination of strategies is not an automatic process. PMID:24244527
Advanced solar cells for satellite power systems
NASA Astrophysics Data System (ADS)
Flood, Dennis J.; Weinberg, Irving
1994-11-01
The multiple natures of today's space missions with regard to operational lifetime, orbital environment, cost and size of spacecraft, to name just a few, present such a broad range of performance requirements to be met by the solar array that no single design can suffice to meet them all. The result is a demand for development of specialized solar cell types that help to optimize overall satellite performance within a specified cost range for any given space mission. Historically, space solar array performance has been optimized for a given mission by tailoring the features of silicon solar cells to account for the orbital environment and average operating conditions expected during the mission. It has become necessary to turn to entirely new photovoltaic materials and device designs to meet the requirements of future missions, both in the near and far term. This paper will outline some of the mission drivers and resulting performance requirements that must be met by advanced solar cells, and provide an overview of some of the advanced cell technologies under development to meet them. The discussion will include high efficiency, radiation hard single junction cells; monolithic and mechanically stacked multiple bandgap cells; and thin film cells.
Fu, Xingang; Li, Shuhui; Fairbank, Michael; Wunsch, Donald C; Alonso, Eduardo
2015-09-01
This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm as well as how to implement optimal control of a grid-connected converter (GCC) using an RNN. To successfully and efficiently train an RNN using the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by the LM algorithm. This paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. This paper presents an analytical study on the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate the ideal optimal controller. The performance of an ideal optimal controller and a well-trained RNN controller was compared in close to real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rate conditions. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
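The LM update at the core of the training procedure is compact enough to show directly. The sketch below applies one damped Gauss-Newton step per iteration to a generic least-squares problem; the Jacobian here comes from a simple linear model rather than from the paper's FATT unrolling of an RNN.

```python
import numpy as np

def lm_step(residual, jacobian, theta, lam):
    """One Levenberg-Marquardt update: (J^T J + lam*I) dtheta = -J^T r."""
    r = residual(theta)
    J = jacobian(theta)
    A = J.T @ J + lam * np.eye(len(theta))
    dtheta = np.linalg.solve(A, -J.T @ r)
    return theta + dtheta

# Toy least-squares problem: fit y = a*x + b.
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0

residual = lambda th: th[0] * x + th[1] - y
jacobian = lambda th: np.column_stack([x, np.ones_like(x)])  # d(r)/d(theta)

theta = np.zeros(2)
for _ in range(10):
    theta = lm_step(residual, jacobian, theta, lam=1e-3)
print(theta)   # approaches [2.0, 1.0]
```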
Acoustic attenuation design requirements established through EPNL parametric trades
NASA Technical Reports Server (NTRS)
Veldman, H. F.
1972-01-01
An optimization procedure was established for providing an acoustic lining configuration that balances engine performance losses against lining attenuation characteristics, using a method which determines acoustic attenuation design requirements through parametric trade studies based on the subjective noise unit of effective perceived noise level (EPNL).
NASA Astrophysics Data System (ADS)
Boughari, Yamina
New methodologies have been developed to optimize the integration, testing, and certification of flight control systems, an expensive process in the aerospace industry. This thesis investigates the stability of the Cessna Citation X aircraft without control, and then optimizes two different flight controllers from design to validation. The aircraft model was obtained from data provided by the Research Aircraft Flight Simulator (RAFS) of the Cessna Citation business aircraft. To increase the stability and control of aircraft systems, optimizations of two different flight control designs were performed: 1) the Linear Quadratic Regulation and Proportional Integral controllers were optimized using the Differential Evolution algorithm with the level 1 handling qualities as the objective function; the results were validated for the linear and nonlinear aircraft models, and some of the clearance criteria were investigated; and 2) the H-infinity control method was applied to the stability and control augmentation systems. To minimize the time required for flight control design and its validation, an optimization of the controller designs was performed using the Differential Evolution (DE) and Genetic (GA) algorithms. The DE algorithm proved to be more efficient than the GA. New tools for visualization of the linear validation process were also developed to reduce the time required for flight controller assessment. Matlab software was used to validate the different optimization algorithms' results. Research platforms of the aircraft's linear and nonlinear models were developed and compared with the results of flight tests performed on the Research Aircraft Flight Simulator. Some of the clearance criteria of the optimized H-infinity flight controller were evaluated, including its linear stability, eigenvalues, and handling qualities criteria. Nonlinear simulations of the maneuver criteria were also investigated during this research to assess the Cessna Citation X's flight controller clearance and, therefore, its anticipated certification.
Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.
Wang, Yubo; Veluvolu, Kalyana C
2017-01-01
Most BCI systems that rely on EEG signals employ Fourier-based methods for time-frequency decomposition for feature extraction. The band-limited multiple Fourier linear combiner is well-suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multiple-channel EEG is not straightforward and remains challenging. As more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multiple-channel EEG also adds high dimensionality to the frequency feature space, so feature selection is required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes it with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.
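As an illustration of encoding both the spatial filter and the feature mask in one real-valued solution vector, the sketch below runs a simple (mu + lambda) evolution strategy on synthetic data; the paper uses CMA-ES and Fourier-based features, both of which are replaced here by toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 4-channel "EEG" features: the class difference lives in a
# spatial projection of the first two feature columns of channel 0.
n, n_ch, n_feat = 200, 4, 6
X = rng.standard_normal((n, n_ch, n_feat))
labels = rng.integers(0, 2, n)
X[labels == 1, 0, :2] += 1.0

def error(sol):
    """Decode one real vector into spatial weights + feature mask."""
    w = sol[:n_ch]                      # spatial filter weights
    mask = sol[n_ch:] > 0.0             # feature selection by sign
    if not mask.any():
        return 1.0
    proj = np.tensordot(X, w, axes=([1], [0]))[:, mask].sum(axis=1)
    pred = (proj > proj.mean()).astype(int)   # crude 1-D classifier
    # Handle the sign ambiguity of the projection.
    return min(np.mean(pred != labels), np.mean((1 - pred) != labels))

# Simple (mu + lambda) evolution strategy; the paper uses CMA-ES instead.
dim, mu, lam = n_ch + n_feat, 5, 20
pop = rng.standard_normal((mu, dim))
for _ in range(50):
    kids = np.repeat(pop, lam // mu, axis=0) + 0.3 * rng.standard_normal((lam, dim))
    both = np.vstack([pop, kids])
    pop = both[np.argsort([error(s) for s in both])][:mu]
print("classification error:", error(pop[0]))
```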
Spacelab mission dependent training parametric resource requirements study
NASA Technical Reports Server (NTRS)
Ogden, D. H.; Watters, H.; Steadman, J.; Conrad, L.
1976-01-01
Training flows were developed for typical missions, resource relationships analyzed, and scheduling optimization algorithms defined. Parametric analyses were performed to study the effect of potential changes in mission model, mission complexity and training time required on the resource quantities required to support training of payload or mission specialists. Typical results of these analyses are presented both in graphic and tabular form.
NASA Astrophysics Data System (ADS)
Saponara, M.; Tramutola, A.; Creten, P.; Hardy, J.; Philippe, C.
2013-08-01
Optimization-based control techniques such as Model Predictive Control (MPC) are considered extremely attractive for space rendezvous, proximity operations and capture applications that require high level of autonomy, optimal path planning and dynamic safety margins. Such control techniques require high-performance computational needs for solving large optimization problems. The development and implementation in a flight representative avionic architecture of a MPC based Guidance, Navigation and Control system has been investigated in the ESA R&T study “On-line Reconfiguration Control System and Avionics Architecture” (ORCSAT) of the Aurora programme. The paper presents the baseline HW and SW avionic architectures, and verification test results obtained with a customised RASTA spacecraft avionics development platform from Aeroflex Gaisler.
Fog computing job scheduling optimization based on bees swarm
NASA Astrophysics Data System (ADS)
Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid
2018-04-01
Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) to address the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between the CPU execution time and the allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and the genetic algorithm in terms of CPU execution time and allocated memory.
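The Bees Life Algorithm operators are not reproduced here; the sketch below only illustrates the kind of CPU-time/memory trade-off objective such a scheduler would optimize, with random-restart search standing in for the bee-inspired evolution. Node and job parameters are hypothetical.

```python
import random

# Hypothetical fog nodes: (CPU speed, memory capacity in MB).
NODES = [(1.0, 512), (2.0, 256), (1.5, 1024)]
# Jobs: (CPU work units, memory demand in MB).
JOBS = [(4, 128), (2, 300), (6, 200), (3, 64), (5, 400)]

def cost(assign, w_time=0.6, w_mem=0.4):
    """Weighted tradeoff between execution time and allocated memory."""
    node_load = [0.0] * len(NODES)
    node_mem = [0.0] * len(NODES)
    for j, n in enumerate(assign):
        work, mem = JOBS[j]
        node_load[n] += work / NODES[n][0]   # time scales with CPU speed
        node_mem[n] += mem
        if node_mem[n] > NODES[n][1]:        # over-committed memory
            return float("inf")
    return w_time * max(node_load) + w_mem * sum(node_mem) / len(NODES)

# Stand-in search loop (the paper evolves assignments with BLA
# reproduction/selection operators instead of pure random restarts).
best = min(([random.randrange(len(NODES)) for _ in JOBS]
            for _ in range(5000)), key=cost)
print(best, cost(best))
```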
GA-optimization for rapid prototype system demonstration
NASA Technical Reports Server (NTRS)
Kim, Jinwoo; Zeigler, Bernard P.
1994-01-01
An application of the Genetic Algorithm (GA) is discussed. A novel hierarchical GA scheme was developed to solve complicated engineering problems which require optimization of a large number of parameters with high precision. High-level GAs search for the few parameters that are most sensitive to the system performance. Low-level GAs search in more detail, employing a greater number of parameters for further optimization. The complexity of the search is thereby decreased, and computing resources are used more efficiently.
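A minimal sketch of the hierarchical idea, under the assumption that the sensitive parameters are known in advance: a coarse GA first searches the few sensitive variables, then a second GA with a smaller mutation step refines the remaining ones. The GA itself is deliberately bare-bones.

```python
import random

def evaluate(coarse, fine):
    """Toy objective: the coarse variables dominate the response."""
    return (sum((c - 0.3) ** 2 for c in coarse) * 10
            + sum((f - 0.7) ** 2 for f in fine))

def ga(fitness, dim, pop_size=30, gens=60, sigma=0.1):
    """Minimal real-coded GA: tournament selection + Gaussian mutation."""
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            parent = min(a, b, key=fitness)
            child = [min(max(g + random.gauss(0, sigma), 0), 1) for g in parent]
            new.append(min(parent, child, key=fitness))
        pop = new
    return min(pop, key=fitness)

# High level: search only the sensitive (coarse) parameters.
coarse = ga(lambda c: evaluate(c, fine=[0.5] * 4), dim=2)
# Low level: refine the remaining parameters with the coarse ones frozen,
# using a smaller mutation step for higher precision.
fine = ga(lambda f: evaluate(coarse, f), dim=4, sigma=0.02)
print(evaluate(coarse, fine))
```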
Algorithms for the optimization of RBE-weighted dose in particle therapy.
Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M
2013-01-21
We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. For the dose calculation, carbon ions are considered, and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
Algorithms for the optimization of RBE-weighted dose in particle therapy
NASA Astrophysics Data System (ADS)
Horcicka, M.; Meyer, C.; Buschbacher, A.; Durante, M.; Krämer, M.
2013-01-01
We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. For the dose calculation, carbon ions are considered, and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
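The Fletcher-Reeves update singled out above is easy to state in isolation. The sketch below applies it, with exact line search, to a small quadratic standing in for the RBE-weighted dose objective; TRiP98's actual objective and line search are not reproduced.

```python
import numpy as np

def fletcher_reeves_cg(A, b, x0, iters=25, tol=1e-12):
    """Fletcher-Reeves conjugate gradient on the quadratic
    f(x) = 0.5 x^T A x - b^T x (A symmetric positive definite)."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                        # gradient
    d = -g
    for _ in range(iters):
        alpha = (g @ g) / (d @ A @ d)    # exact line search along d
        x = x + alpha * d
        g_new = A @ x - b
        if np.linalg.norm(g_new) < tol:
            break
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[3.0, 0.5], [0.5, 1.0]])   # toy SPD Hessian
b = np.array([1.0, 2.0])
print(fletcher_reeves_cg(A, b, [0.0, 0.0]))  # approaches A^{-1} b = [0, 2]
```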
Optimum Design of High Speed Prop-Rotors
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi
1992-01-01
The objective of this research is to develop optimization procedures to provide design trends in high speed prop-rotors. The necessary disciplinary couplings are all considered within a closed loop optimization process. The procedures involve the consideration of blade aeroelastic, aerodynamic performance, structural and dynamic design requirements. Further, since the design involves consideration of several different objectives, multiobjective function formulation techniques are developed.
ERIC Educational Resources Information Center
Burns, Nicholas R.; Lee, Michael D.; Vickers, Douglas
2006-01-01
Studies of human problem solving have traditionally used deterministic tasks that require the execution of a systematic series of steps to reach a rational and optimal solution. Most real-world problems, however, are characterized by uncertainty, the need to consider an enormous number of variables and possible courses of action at each stage in…
Assessment and Verification of SLS Block 1-B Exploration Upper Stage and Stage Disposal Performance
NASA Technical Reports Server (NTRS)
Patrick, Sean; Oliver, T. Emerson; Anzalone, Evan J.
2018-01-01
Delta-v allocation to correct for insertion errors caused by state uncertainty is one of the key performance requirements imposed on the SLS Navigation System. Additionally, SLS mission requirements include the need for the Exploration Upper Stage (EUS) to be disposed of successfully. To assess these requirements, the SLS navigation team has developed and implemented a series of analysis methods. Here the authors detail the Delta-Delta-V approach to assessing delta-v allocation as well as the EUS disposal optimization approach.
A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.
Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T
2010-09-01
To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general-purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
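The authors' solver is not public; as a generic illustration of projection-based treatment planning, the sketch below uses cyclic projections onto half-space dose constraints (POCS) for a toy three-beamlet problem. All constraint values are invented.

```python
import numpy as np

def project_halfspace(x, a, bound):
    """Project x onto {x : a.x <= bound} (a single dose constraint)."""
    viol = a @ x - bound
    if viol <= 0:
        return x
    return x - viol * a / (a @ a)

# Hypothetical tiny problem: 3 beamlet weights, two max-dose rows,
# one min-dose row (handled with flipped sign), plus nonnegativity.
A_max = np.array([[1.0, 0.5, 0.2],
                  [0.3, 1.0, 0.4]])
d_max = np.array([1.0, 1.2])
a_min = np.array([0.5, 0.5, 1.0])
d_min = 0.8

x = np.zeros(3)
for _ in range(200):                   # cyclic projections (POCS)
    for a, bnd in zip(A_max, d_max):
        x = project_halfspace(x, a, bnd)
    x = project_halfspace(x, -a_min, -d_min)   # enforce a_min.x >= d_min
    x = np.maximum(x, 0.0)             # beamlet weights stay nonnegative
print(x, A_max @ x, a_min @ x)
```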
Multi-objective/loading optimization for rotating composite flexbeams
NASA Technical Reports Server (NTRS)
Hamilton, Brian K.; Peters, James R.
1989-01-01
With the evolution of advanced composites, the feasibility of designing bearingless rotor systems for high speed, demanding maneuver envelopes, and high aircraft gross weights has become a reality. These systems eliminate the need for hinges and heavily loaded bearings by incorporating a composite flexbeam structure which accommodates flapping, lead-lag, and feathering motions by bending and twisting while reacting the full blade centrifugal force. The flight characteristics of a bearingless rotor system are largely dependent on hub design, and the principal element in this type of system is the composite flexbeam. As in any hub design, trade-off studies must be performed in order to optimize performance, dynamics (stability), handling qualities, and stresses. However, since the flexbeam structure is the primary component which will determine the balance of these characteristics, its design and fabrication are not straightforward. It was concluded that: pitchcase and snubber damper representations are required in the flexbeam model for proper sizing resulting from dynamic requirements; optimization is necessary for flexbeam design, since it reduces the design iteration time and results in an improved design; and inclusion of multiple flight conditions and their corresponding fatigue allowables is necessary for the optimization procedure.
NASA Astrophysics Data System (ADS)
Hinze, J. F.; Klein, S. A.; Nellis, G. F.
2015-12-01
Mixed refrigerant (MR) working fluids can significantly increase the cooling capacity of a Joule-Thomson (JT) cycle. The optimization of MRJT systems has been the subject of substantial research. However, most optimization techniques do not model the recuperator in sufficient detail. For example, the recuperator is usually assumed to have a heat transfer coefficient that does not vary with the mixture. Ongoing work at the University of Wisconsin-Madison has shown that the heat transfer coefficients for two-phase flow are approximately three times greater than for a single phase mixture when the mixture quality is between 15% and 85%. As a result, a system that optimizes a MR without also requiring that the flow be in this quality range may require an extremely large recuperator or not achieve the performance predicted by the model. To ensure optimal performance of the JT cycle, the MR should be selected such that it is entirely two-phase within the recuperator. To determine the optimal MR composition, a parametric study was conducted assuming a thermodynamically ideal cycle. The results of the parametric study are graphically presented on a contour plot in the parameter space consisting of the extremes of the qualities that exist within the recuperator. The contours show constant values of the normalized refrigeration power. This ‘map’ shows the effect of MR composition on the cycle performance and it can be used to select the MR that provides a high cooling load while also constraining the recuperator to be two phase. The predicted best MR composition can be used as a starting point for experimentally determining the best MR.
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Mccarthy, Thomas R.; Madden, John F., III
1992-01-01
An optimization procedure is developed for the design of high speed prop-rotors to be used in civil tiltrotor applications. The goal is to couple aerodynamic performance, aeroelastic stability, and structural design requirements inside a closed-loop optimization procedure. The objective is to minimize the gross weight and maximize the propulsive efficiency in high speed cruise. Constraints are imposed on the rotor aeroelastic stability in both hover and cruise and rotor figure of merit in hover. Both structural and aerodynamic design variables are used.
Boom Minimization Framework for Supersonic Aircraft Using CFD Analysis
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Rallabhandi, Sriram K.
2010-01-01
A new framework is presented for shape optimization using analytical shape functions and high-fidelity computational fluid dynamics (CFD) via Cart3D. The focus of the paper is the system-level integration of several key enabling analysis tools and automation methods to perform shape optimization and reduce sonic boom footprint. A boom mitigation case study subject to performance, stability and geometrical requirements is presented to demonstrate a subset of the capabilities of the framework. Lastly, a design space exploration is carried out to assess the key parameters and constraints driving the design.
Performance of arrays of direct-driven wave energy converters under optimal power take-off damping
NASA Astrophysics Data System (ADS)
Wang, Liguo; Engström, Jens; Leijon, Mats; Isberg, Jan
2016-08-01
It is well known that the total power converted by a wave energy farm is influenced by the hydrodynamic interactions between wave energy converters, especially when they are close to each other. Therefore, to improve the performance of a wave energy farm, the hydrodynamic interaction between converters must be considered, which can be influenced by the power take-off damping of individual converters. In this paper, the performance of arrays of wave energy converters under optimal hydrodynamic interaction and power take-off damping is investigated. This is achieved by coordinating the power take-off damping of individual converters, resulting in optimal hydrodynamic interaction as well as higher production of time-averaged power converted by the farm. Physical constraints on motion amplitudes are considered in the solution, which is required for the practical implementation of wave energy converters. Results indicate that the natural frequency of a wave energy converter under optimal damping will not vary with sea states, but the production performance of a wave energy farm can be improved significantly while satisfying the motion constraints.
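For a single-degree-of-freedom heaving absorber, the effect of power take-off damping on time-averaged power and motion amplitude can be worked through directly; the sketch below scans the damping under an amplitude constraint. All physical values are illustrative, and array interactions, the subject of the paper, are not modeled.

```python
import numpy as np

# Single-DOF heaving buoy in regular waves (illustrative values only):
# (m + A) x'' + (b_hyd + b_pto) x' + k x = F cos(w t)
m_plus_A, b_hyd, k, F = 5.0e5, 2.0e4, 8.0e5, 1.0e5
w = np.sqrt(k / m_plus_A) * 0.9        # excitation slightly off resonance
X_MAX = 1.5                            # motion amplitude constraint (m)

def mean_power(b_pto):
    """Time-averaged absorbed power 0.5 * b_pto * w^2 * |X|^2."""
    Z = k - m_plus_A * w**2 + 1j * w * (b_hyd + b_pto)   # dynamic stiffness
    X = F / abs(Z)                                        # motion amplitude
    return 0.5 * b_pto * w**2 * X**2, X

# Scan the PTO damping; discard settings violating the motion constraint.
best = max(
    (b for b in np.linspace(1e3, 5e5, 500) if mean_power(b)[1] <= X_MAX),
    key=lambda b: mean_power(b)[0],
)
print(best, mean_power(best))
```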
NASA Technical Reports Server (NTRS)
Hahne, David E.; Glaab, Louis J.
1999-01-01
An investigation was performed to evaluate leading- and trailing-edge flap deflections for optimal aerodynamic performance of a High-Speed Civil Transport concept during takeoff and approach-to-landing conditions. The configuration used for this study was designed by the Douglas Aircraft Company during the 1970's. A 0.1-scale model of this configuration was tested in the Langley 30- by 60-Foot Tunnel with both the original leading-edge flap system and a new leading-edge flap system, which was designed with modern computational flow analysis and optimization tools. Leading- and trailing-edge flap deflections were generated for the original and modified leading-edge flap systems with the computational flow analysis and optimization tools. Although wind tunnel data indicated improvements in aerodynamic performance for the analytically derived flap deflections for both leading-edge flap systems, perturbations of the analytically derived leading-edge flap deflections yielded significant additional improvements in aerodynamic performance. In addition to the aerodynamic performance optimization testing, stability and control data were also obtained. An evaluation of the crosswind landing capability of the aircraft configuration revealed that insufficient lateral control existed as a result of high levels of lateral stability. Deflection of the leading- and trailing-edge flaps improved the crosswind landing capability of the vehicle considerably; however, additional improvements are required.
A Method to Determine Supply Voltage of Permanent Magnet Motor at Optimal Design Stage
NASA Astrophysics Data System (ADS)
Matustomo, Shinya; Noguchi, So; Yamashita, Hideo; Tanimoto, Shigeya
Permanent magnet (PM) motors are widely used in electrical machinery such as air conditioners, refrigerators, and so on. In recent years, from the point of view of energy saving, it has become necessary to improve the efficiency of PM motors by optimization. However, the efficiency optimization of a PM motor involves many design variables and many restrictions. In this paper, the efficiency optimization of a PM motor with many design variables was performed using voltage-driven finite element analysis with a rotating simulation of the motor and a genetic algorithm.
Performance Analysis and Design Synthesis (PADS) computer program. Volume 1: Formulation
NASA Technical Reports Server (NTRS)
1972-01-01
The program formulation for PADS computer program is presented. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module.
Optimization of the multi-turn injection efficiency for a medical synchrotron
NASA Astrophysics Data System (ADS)
Kim, J.; Yoon, M.; Yim, H.
2016-09-01
We present a method for optimizing the multi-turn injection efficiency for a medical synchrotron. We show that for a given injection energy, the injection efficiency can be greatly enhanced by choosing the transverse tunes appropriately and by optimizing the injection bump and the number of turns required for beam injection. We verify our study by applying the method to the Korea Heavy Ion Medical Accelerator (KHIMA) synchrotron, which is currently being built on the campus of the Dongnam Institute of Radiological and Medical Sciences (DIRAMS) in Busan, Korea. First, a frequency map analysis was performed with the help of the ELEGANT and ACCSIM codes. The tunes that yielded good injection efficiency were then selected. With these tunes, the injection bump and the number of turns required for injection were optimized by tracking a number of particles for up to one thousand turns after injection, beyond which no further beam loss occurred. Results for the optimization of the injection efficiency for protons are presented.
Model-Based Design of Tree WSNs for Decentralized Detection.
Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam
2015-08-20
The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments, has been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches.
Optimization of PET instrumentation for brain activation studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlbom, M.; Cherry, S.R.; Hoffman, E.J.
By performing cerebral blood flow studies with positron emission tomography (PET), and comparing blood flow images of different states of activation, functional mapping of the brain is possible. The ability of current commercial instruments to perform such studies is investigated in this work, based on a comparison of noise equivalent count (NEC) rates. Differences in the NEC performance of the different scanners, in conjunction with scanner design parameters, provide insights into the importance of block design (size, dead time, crystal thickness) and overall scanner design (sensitivity and scatter fraction) for optimizing data from activation studies. The newer scanners with removable septa, operating with 3-D acquisition, have much higher sensitivity, but require new methodology for optimized operation. Only by administering multiple low doses (fractionation) of the flow tracer can the high sensitivity be utilized.
NASA Technical Reports Server (NTRS)
Nguyen, Howard; Willacy, Karen; Allen, Mark
2012-01-01
KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.
A hybrid approach to near-optimal launch vehicle guidance
NASA Technical Reports Server (NTRS)
Leung, Martin S. K.; Calise, Anthony J.
1992-01-01
This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small-dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation for a vertically launched 2-stage heavy-lift capacity vehicle to a low Earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example, the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.
High speed civil transport aerodynamic optimization
NASA Technical Reports Server (NTRS)
Ryan, James S.
1994-01-01
This is a report of work in support of the Computational Aerosciences (CAS) element of the Federal HPCC program. Specifically, CFD and aerodynamic optimization are being performed on parallel computers. The long-range goal of this work is to facilitate teraflops-rate multidisciplinary optimization of aerospace vehicles. This year's work is targeted for application to the High Speed Civil Transport (HSCT), one of four CAS grand challenges identified in the HPCC FY 1995 Blue Book. This vehicle is to be a passenger aircraft, with the promise of cutting overseas flight time by more than half. To meet fuel economy, operational costs, environmental impact, noise production, and range requirements, improved design tools are required, and these tools must eventually integrate optimization, external aerodynamics, propulsion, structures, heat transfer, controls, and perhaps other disciplines. The fundamental goal of this project is to contribute to improved design tools for U.S. industry, and thus to the nation's economic competitiveness.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
1998-01-01
Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are needed frequently, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer by means of neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers: the CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.
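The surrogate-training workflow described here follows a common pattern: sample the expensive analyzer, fit both a neural network and a polynomial regression, then query the cheap fits inside the optimizer. A minimal sketch with scikit-learn, where the quadratic `expensive_analyzer` is a hypothetical stand-in for the full aircraft analysis code:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def expensive_analyzer(X):
    # Hypothetical stand-in for the full aircraft analysis code.
    return np.sum(X**2, axis=1) + 0.3 * X[:, 0] * X[:, 1]

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 4))   # sampled design points
y_train = expensive_analyzer(X_train)          # input-output training pairs

# Approximating analyzer 1: neural network.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X_train, y_train)
# Approximating analyzer 2: quadratic regression.
reg = make_pipeline(PolynomialFeatures(degree=2),
                    LinearRegression()).fit(X_train, y_train)

X_new = rng.uniform(-1, 1, size=(5, 4))
print(net.predict(X_new))   # cheap surrogate queries inside the optimizer
print(reg.predict(X_new))
```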
The optimization of concrete mixtures for use in highway applications
NASA Astrophysics Data System (ADS)
Moini, Mohamadreza
Portland cement concrete is the most used commodity in the world after water. A major part of civil and transportation infrastructure, including bridges, roadway pavements, dams, and buildings, is made of concrete, and concrete durability is often a major concern. In 2013 the American Society of Civil Engineers (ASCE) estimated that an annual investment of $170 billion in roads and $20.5 billion in bridges is needed to substantially improve the condition of the infrastructure; the same report notes that one-third of America's major roads are in poor or mediocre condition [1]. At the same time, portland cement production is associated with substantial carbon dioxide emissions. The proper and systematic design of concrete mixtures for highway applications is therefore essential, as concrete pavements represent up to 60% of interstate highway systems carrying the heaviest traffic loads. Combined principles of materials science and engineering can provide adequate methods and tools to facilitate concrete design and improve existing specifications. In the same manner, durability must be addressed in the design and in the enhancement of long-term performance. Concrete used for highway pavement applications has low cement content and can be placed at low slump. However, further reduction of cement content (e.g., below current Wisconsin Department of Transportation specifications of 315-338 kg/m3 (530-570 lb/yd3) for mainstream concrete pavements and 335 kg/m3 (565 lb/yd3) for bridge substructures and superstructures) requires careful design of the mixture to maintain the expected workability, overall performance, and long-term durability in the field. The design includes, but is not limited to, the optimization of aggregates, supplementary cementitious materials (SCMs), and chemical and air-entraining admixtures. This research investigated various theoretical and experimental methods of aggregate optimization applicable to the reduction of cement content. The conducted research enabled further reduction of cement content to 250 kg/m3 (420 lb/yd3), as required for the design of sustainable concrete pavements. This research demonstrated that aggregate packing can be used in multiple ways as a tool to optimize aggregate assemblies and achieve the optimal particle size distribution of aggregate blends. The SCMs and air-entraining admixtures were selected to comply with existing WisDOT performance requirements, and chemical admixtures were selected in a separate optimization study excluded from this thesis. The performance of different concrete mixtures was evaluated for fresh properties, strength development, and compressive and flexural strength at ages ranging from 1 to 360 days. The methods and tools discussed in this research are applicable, but not limited, to concrete pavement applications. Current concrete proportioning standards such as ACI 211 and current WisDOT roadway standard specifications (Part 5: Structures, Section 501: Concrete) have limited or no recommendations, methods, or guidelines on aggregate optimization, the use of ternary aggregate blends (such as those used in the asphalt industry), the optimization of SCMs (e.g., class F and C fly ash, slag, metakaolin, silica fume), modern superplasticizers (such as polycarboxylate ether, PCE), or air-entraining admixtures.
This research demonstrated that the optimization of concrete mixture proportions can be achieved by the use and proper selection of optimal aggregate blends, resulting in a 12% to 35% reduction of cement content and a more than 50% enhancement of performance. To prove the proposed concrete proportioning method, the following steps were performed:
• The experimental aggregate packing was investigated using northern and southern sources of aggregates from Wisconsin;
• Theoretical aggregate packing models were utilized and the results were compared with experiments;
• Multiple aggregate optimization methods (e.g., optimal grading, coarseness chart) were studied and compared to aggregate packing results and the performance of the tested concrete mixtures;
• Optimal aggregate blends were selected and used for concrete mixtures;
• The optimal dosages of admixtures were selected for three types of plasticizing and superplasticizing admixtures based on a separately conducted study;
• The SCM dosages were selected based on current WisDOT specifications;
• The optimal air-entraining admixture dosage was investigated based on the performance of preliminary concrete mixtures;
• Finally, optimal concrete mixtures were tested for fresh properties, compressive strength development, and modulus of rupture at early ages (1 day) and ultimate ages (360 days);
• Durability performance indicators for the optimal concrete mixtures were also tested: resistance to rapid chloride permeability (RCP) at 30 and 90 days and resistance to rapid freezing and thawing at 56 days.
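One common way to pose the aggregate-blend selection computationally is to choose blend fractions so the combined gradation tracks a power-curve target (a modified Andreassen distribution). A minimal sketch of that idea; the sieve sizes, gradations, and exponent q = 0.45 are illustrative assumptions, not values from the thesis:

```python
import numpy as np
from scipy.optimize import minimize

# Sieve sizes (mm) and percent passing for three aggregates (illustrative).
sieves = np.array([25.0, 19.0, 12.5, 9.5, 4.75, 2.36, 1.18, 0.6, 0.3, 0.15])
passing = np.array([
    [100, 95, 60, 35, 5, 2, 1, 0, 0, 0],          # coarse
    [100, 100, 98, 85, 30, 10, 4, 2, 1, 0],       # intermediate
    [100, 100, 100, 100, 98, 85, 70, 48, 22, 6],  # fine
], dtype=float)

q, d_max = 0.45, sieves.max()
target = 100.0 * (sieves / d_max) ** q  # modified Andreassen target curve

def misfit(w):
    blend = w @ passing                 # combined gradation of the blend
    return np.sum((blend - target) ** 2)

res = minimize(misfit, x0=np.full(3, 1 / 3),
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(res.x)  # optimal mass fractions of the three aggregates
```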
A system level model for preliminary design of a space propulsion solid rocket motor
NASA Astrophysics Data System (ADS)
Schumacher, Daniel M.
Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near-optimal performance of subsystems and components. Conversely, there is no system-level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high-utility design. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than a most favorable design. Classical optimization is extremely challenging when dealing with the complex behavior of an integrated system: the complexity and the combinations of system configurations make the number of design parameters to be traded off unmanageable with manual techniques. Existing multi-disciplinary optimization approaches generally rely on estimating ratios and correlations rather than on mathematical models. The developed system-level model utilizes a genetic algorithm to perform the necessary population searches, efficiently replacing the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system-level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near-optimal design, is achievable. The process of developing the motor performance estimate and the system-level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints in pursuit of the best possible design.
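The population search at the core of such a model can be illustrated with a bare-bones real-coded genetic algorithm. The sketch below optimizes a toy merit function and is not the dissertation's model; the fitness, bounds, and GA settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # Hypothetical merit function standing in for the motor model
    # (e.g., delivered impulse penalized by inert mass).
    return -np.sum((x - 0.7) ** 2)

def ga(n_pop=60, n_var=5, n_gen=100, sigma=0.1):
    pop = rng.uniform(0, 1, (n_pop, n_var))
    for _ in range(n_gen):
        fit = np.array([fitness(p) for p in pop])
        # Tournament selection: keep the better of random pairs.
        i, j = rng.integers(n_pop, size=(2, n_pop))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # Uniform crossover between consecutive parents.
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped to the design bounds.
        children += rng.normal(0, sigma, children.shape)
        pop = np.clip(children, 0, 1)
    return pop[np.argmax([fitness(p) for p in pop])]

print(ga())  # best design found, near [0.7, 0.7, 0.7, 0.7, 0.7]
```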
NASA Astrophysics Data System (ADS)
Bencherif, H.; Djeffal, F.; Ferhati, H.
2016-09-01
This paper presents a hybrid approach based on an analytical and metaheuristic investigation to study the impact of interdigitated electrode engineering on both the speed and the optical performance of an interdigitated metal-semiconductor-metal ultraviolet photodetector (IMSM-UV-PD). In this context, analytical models for the speed and optical performance have been developed and validated against experimental results, with good agreement. Moreover, the developed analytical models have been used as objective functions to determine the optimized design parameters, including the interdigit configuration effect, via a multi-objective genetic algorithm (MOGA). The ultimate goal of the proposed hybrid approach is to identify the design parameters associated with the maximum electrical and optical device performance. The optimized IMSM-PD not only reveals superior performance in terms of photocurrent and response time, but also exhibits higher optical reliability against the optical losses due to active-area shadowing effects. The advantages offered by the proposed design methodology suggest the possibility of overcoming the most challenging problem posed by the communication speed and power requirements of UV optical interconnects: achieving high drive current and commutation speed in the UV receiver.
COMSATCOM service technical baseline strategy development approach using PPBW concept
NASA Astrophysics Data System (ADS)
Nguyen, Tien M.; Guillen, Andy T.
2016-05-01
This paper presents an innovative approach to developing a Commercial Satellite Communications (COMSATCOM) service Technical Baseline (TB) and associated Program Baseline (PB) strategy using the Portable Pool Bandwidth (PPBW) concept. The concept involves trading the purchased commercial transponders' bandwidths (BWs) with existing commercial satellites' bandwidths participating in a "designated pool bandwidth" according to agreed terms and conditions. The Space and Missile Systems Center (SMC) has been implementing the Better Buying Power (BBP 3.0) directive and recommending that System Program Offices (SPOs) own the Program and Technical Baseline (PTB) [1, 2] in order to develop flexible acquisition strategies and achieve affordability and increased competition. This paper defines and describes the critical PTB parameters and associated requirements that are important to the government SPO for "owning" an affordable COMSATCOM services contract using the PPBW trading concept. The paper describes a step-by-step approach to optimally perform PPBW trading to meet DoD and stakeholder (i) affordability requirements and (ii) fixed and variable bandwidth requirements by optimizing communications performance, cost, and PPBW accessibility in terms of Quality of Service (QoS), Bandwidth Sharing Ratio (BSR), Committed Information Rate (CIR), Burstable Information Rate (BIR), transponder equivalent bandwidth (TPE), and transponder Net Present Value (NPV). The affordable optimal solution that meets variable bandwidth requirements will consider the operating and trading terms and conditions described in the Fair Access Policy (FAP).
NASA Astrophysics Data System (ADS)
Pravdivtsev, Andrey V.
2012-06-01
The article presents an approach to the design of wide-angle optical systems with special illumination and instantaneous field of view (IFOV) requirements. Unevenness of illumination reduces the dynamic range of the system, which negatively affects the system's ability to perform its task. The resulting illumination on the detector depends, among other factors, on IFOV changes. It is also necessary to consider the IFOV in the synthesis of data processing algorithms, as it directly affects the potential signal-to-background ratio in the case of statistically homogeneous backgrounds. A numerical-analytical approach that simplifies the design of wide-angle optical systems with special illumination and IFOV requirements is presented. The solution can be used for optical systems whose field of view is greater than 180 degrees. Illumination calculation in optical CAD is based on computationally expensive tracing of a large number of rays. The author proposes to use analytical expressions for some of the characteristics on which illumination depends. The remaining characteristics are determined numerically using less computationally expensive operands, and the calculation is not performed at every optimization step. The results of the analytical calculation are inserted into the merit function of the optical CAD optimizer. As a result, the optimizer load is reduced, since less computationally expensive operands are used. This reduces the time and resources required to develop a system with the desired characteristics. The proposed approach simplifies the creation and understanding of the requirements for the quality of the optical system, reduces the time and resources required to develop an optical system, and allows more efficient EOS to be created.
NASA Astrophysics Data System (ADS)
Oh, Sahuck; Jiang, Chung-Hsiang; Jiang, Chiyu; Marcus, Philip S.
2017-10-01
We present a new, general design method, called design-by-morphing, for an object whose performance is determined by its shape due to hydrodynamic, aerodynamic, structural, or thermal requirements. To illustrate the method, we design a new leading-and-trailing car of a train by morphing existing, baseline leading-and-trailing cars to minimize the drag. In design-by-morphing, the morphing is done by representing the shapes with polygonal meshes and spectrally with a truncated series of spherical harmonics. The optimal design is found by computing the optimal weights of each of the baseline shapes so that the morphed shape has minimum drag. As a result of optimization, we found that with only two baseline trains that mimic current low-drag high-speed trains, the drag of the optimal train is reduced by 8.04% with respect to the baseline train with the smaller drag. When we repeat the optimization with a third baseline train that under-performs compared to the other baseline trains, the drag of the new optimal train is reduced by 13.46%. This finding shows that bad examples of design are as useful as good examples in determining an optimal design. We show that design-by-morphing can be extended to many engineering problems in which the performance of an object depends on its shape.
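The core morphing step is just a normalized weighted combination of corresponding shape coefficients. A minimal sketch of that idea on spherical-harmonic coefficient arrays; the shapes and weights here are placeholders, not the paper's train geometries:

```python
import numpy as np

def morph(weights, coeff_sets):
    """Blend baseline shapes given as arrays of spherical-harmonic
    coefficients; weights are normalized to sum to one so the
    morphed shape stays in the span of the baselines."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.asarray(coeff_sets), axes=1)

# Three baseline shapes, each described by 64 harmonic coefficients.
rng = np.random.default_rng(2)
baselines = rng.normal(size=(3, 64))
candidate = morph([0.5, 0.3, 0.2], baselines)
# An outer optimizer would now evaluate drag(candidate) via CFD
# and search over the weight vector for the minimum-drag blend.
print(candidate.shape)
```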
Geomagnetic field modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
1980-01-01
Data sets selected for mini-batches and the software modifications required for processing these sets are described. Initial analysis was performed on mini-batch field model recovery. Studies are being performed to examine the convergence of the solutions and the maximum expansion order the data will support in the constant and secular terms.
Display/control requirements for VTOL aircraft
NASA Technical Reports Server (NTRS)
Hoffman, W. C.; Curry, R. E.; Kleinman, D. L.; Hollister, W. M.; Young, L. R.
1975-01-01
Quantitative metrics were determined for system control performance, workload for control, monitoring performance, and workload for monitoring. Pilot tasks were allocated for the navigation and guidance of automated commercial V/STOL aircraft in all weather conditions, using an optimal control model of the human operator to determine display elements and design.
IMIS: Integrated Maintenance Information System. A maintenance information delivery concept
NASA Technical Reports Server (NTRS)
Vonholle, Joseph C.
1987-01-01
The Integrated Maintenance Information System (IMIS) will optimize the use of available manpower, enhance technical performance, improve training, and reduce the support equipment and documentation needed for deployment. It will serve as the technician's single, integrated source of all the technical information required to perform modern aircraft maintenance.
Optimization of HTS superconducting magnetic energy storage magnet volume
NASA Astrophysics Data System (ADS)
Korpela, Aki; Lehtonen, Jorma; Mikkonen, Risto
2003-08-01
Nonlinear optimization problems in the field of electromagnetics have been successfully solved by means of sequential quadratic programming (SQP) and the finite element method (FEM). For example, the combination of SQP and FEM has proven to be an efficient tool in the optimization of low temperature superconductor (LTS) superconducting magnetic energy storage (SMES) magnets. The procedure can also be applied to the optimization of HTS magnets. However, due to the strongly anisotropic material and the slanted electric field versus current density characteristic of high temperature superconductors (HTS), the optimization is quite different from that of LTS magnets. In this paper the volumes of solenoidal conduction-cooled Bi-2223/Ag SMES magnets have been optimized at an operating temperature of 20 K. In addition to the electromagnetic constraints, the stress caused by tape bending has also been taken into account. Several optimization runs with different initial geometries were performed in order to find the best possible solution for a given energy requirement. The optimization constraints describe steady-state operation; thus the presented coil geometries are designed for slow ramping rates. Different energy requirements were investigated in order to find the energy dependence of the design parameters of optimized solenoidal HTS coils. According to the results, these dependences can be described with polynomial expressions.
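The SQP step in such a workflow can be mimicked with an off-the-shelf SLSQP solver. A toy sketch minimizing solenoid winding volume subject to a stored-energy equality constraint; the magnet model here is a crude air-core approximation with an assumed 1 cm winding build, not the paper's FEM:

```python
import numpy as np
from scipy.optimize import minimize

mu0 = 4e-7 * np.pi

def volume(x):
    r, h, j = x                       # mean radius (m), height (m), A-turns/m
    return 2 * np.pi * r * h * 0.01   # fixed 1 cm winding build (assumed)

def stored_energy(x):
    r, h, j = x
    # Crude long air-core solenoid: B = mu0*j*h, E = B^2/(2*mu0) * bore volume.
    b = mu0 * j * h
    return b**2 / (2 * mu0) * np.pi * r**2 * h

res = minimize(volume, x0=[0.2, 0.3, 1e7], method="SLSQP",
               bounds=[(0.05, 1.0), (0.05, 1.0), (1e6, 5e7)],
               constraints={"type": "eq",
                            "fun": lambda x: stored_energy(x) - 1e5})  # 100 kJ
print(res.x, volume(res.x))
```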
Design and performance of optimal detectors for guided wave structural health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dib, G.; Udpa, L.
2016-01-01
Ultrasonic guided wave measurements in a long-term structural health monitoring system are affected by measurement noise, environmental conditions, and transducer aging and malfunction. This results in measurement variability which affects detection performance, especially in complex structures where baseline data comparison is required. This paper derives the optimal detector structure, within the framework of detection theory, where a guided wave signal at the sensor is represented by a single feature value that can be compared with a threshold. Three different types of detectors are derived depending on the underlying structure's complexity: (i) simple structures where defect reflections can be identified without the need for baseline data; (ii) simple structures that require baseline data due to overlap of defect scatter with scatter from structural features; and (iii) complex structures with dense structural features that require baseline data. The detectors are derived by modeling the effects of variabilities and uncertainties as random processes. Analytical solutions for the performance of the detectors in terms of the probability of detection and probability of false alarm are derived. A finite element model is used to generate guided wave signals, and the performance results of a Monte Carlo simulation are compared with the theoretical performance. Initial results demonstrate that the problems of signal complexity and environmental variability can in fact be exploited to improve detection performance.
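For the simplest case (a known signal in Gaussian noise), the optimal detector is a matched filter compared against a threshold set by the allowed false-alarm rate. A minimal sketch of that textbook construction, not the paper's three detectors; the signal and noise model are assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
s = np.sin(2 * np.pi * 0.1 * np.arange(128))   # known defect signature
sigma, pfa = 1.0, 1e-3                          # noise std, false-alarm rate

# Matched-filter statistic T(x) = s.x is Gaussian under both hypotheses;
# Neyman-Pearson threshold for the target false-alarm probability:
tau = sigma * np.linalg.norm(s) * norm.ppf(1 - pfa)

x = s + rng.normal(0, sigma, s.size)            # measurement with defect
print("detect:", s @ x > tau)

# Theoretical probability of detection at this operating point.
pd = 1 - norm.cdf(norm.ppf(1 - pfa) - np.linalg.norm(s) / sigma)
print("Pd =", pd)
```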
Performance of local optimization in single-plane fluoroscopic analysis for total knee arthroplasty.
Prins, A H; Kaptein, B L; Stoel, B C; Lahaye, D J P; Valstar, E R
2015-11-05
Fluoroscopy-derived joint kinematics plays an important role in the evaluation of knee prostheses. Fluoroscopic analysis requires estimation of the 3D prosthesis pose from its 2D silhouette in the fluoroscopic image by optimizing a dissimilarity measure. Currently, extensive user interaction is needed, which makes analysis labor-intensive and operator-dependent. The aim of this study was to review five optimization methods for 3D pose estimation and to assess their performance in finding the correct solution. Two derivative-free optimizers (DHSAnn and IIPM) and three gradient-based optimizers (LevMar, DoNLP2 and IpOpt) were evaluated. For the latter three optimizers, two different implementations were evaluated: one with a numerically approximated gradient and one with an analytically derived gradient for computational efficiency. On phantom data, all methods were able to find the 3D pose within 1 mm and 1° in more than 85% of cases. IpOpt had the highest success rate: 97%. On clinical data, the success rates were higher than 85% for the in-plane positions, but not for the rotations. IpOpt was the most computationally expensive method, and the use of analytically derived gradients accelerated the gradient-based methods by a factor of 3-4 without any difference in success rate. In conclusion, 85% of the frames in clinical data can be analyzed automatically and only 15% of the frames require manual supervision. The success rate on phantom data (97% with IpOpt) indicates that even less supervision may become feasible.
Weight optimal design of lateral wing upper covers made of composite materials
NASA Astrophysics Data System (ADS)
Barkanov, Evgeny; Eglītis, Edgars; Almeida, Filipe; Bowering, Mark C.; Watson, Glenn
2016-09-01
The present investigation is devoted to the development of a new optimal design of lateral wing upper covers made of advanced composite materials, with special emphasis on closer conformity of the developed finite element analysis and operational requirements for aircraft wing panels. In the first stage, 24 weight optimization problems based on linear buckling analysis were solved for the laminated composite panels with three types of stiffener, two stiffener pitches and four load levels, taking into account manufacturing, reparability and damage tolerance requirements. In the second stage, a composite panel with the best weight/design performance from the previous study was verified by nonlinear buckling analysis and optimization to investigate the effect of shear and fuel pressure on the performance of stiffened panels, and their behaviour under skin post-buckling. Three rib-bay laminated composite panels with T-, I- and HAT-stiffeners were modelled with ANSYS, NASTRAN and ABAQUS finite element codes to study their buckling behaviour as a function of skin and stiffener lay-ups, stiffener height, stiffener top and root width. Owing to the large dimension of numerical problems to be solved, an optimization methodology was developed employing the method of experimental design and response surface technique. Optimal results obtained in terms of cross-sectional areas were verified successfully using ANSYS and ABAQUS shared-node models and a NASTRAN rigid-linked model, and were used later to estimate the weight of the Advanced Low Cost Aircraft Structures (ALCAS) lateral wing upper cover.
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements; we have therefore optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility, allowing developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers, although getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
Automation of POST Cases via External Optimizer and "Artificial p2" Calculation
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.
2017-01-01
During early conceptual design of complex systems, speed and accuracy are often at odds with one another. While many characteristics of the design fluctuate rapidly during this phase, there is nonetheless a need to acquire accurate data from which to down-select designs, as these decisions will have a large impact upon program life-cycle cost. Enabling the conceptual designer to produce accurate data in a timely manner is therefore tantamount to program viability. For conceptual design of launch vehicles, trajectory analysis and optimization is a large hurdle. Tools such as the industry-standard Program to Optimize Simulated Trajectories (POST) have traditionally required an expert in the loop for setting up inputs, running the program, and analyzing the output. The solution space for trajectory analysis is in general non-linear and multi-modal, requiring an experienced analyst to weed out sub-optimal designs in pursuit of the global optimum. While an experienced analyst presented with a vehicle similar to one they have already worked on can likely produce optimal performance figures in a timely manner, as soon as the "experienced" or "similar" adjectives are invalid the process can become lengthy. In addition, an experienced analyst working on a similar vehicle may go into the analysis with preconceived ideas about what the vehicle's trajectory should look like, which can result in sub-optimal performance being recorded. Thus, in any case but the ideal, either time or accuracy is sacrificed. In the authors' previous work a tool called multiPOST was created which captures the heuristics of a human analyst over the process of executing trajectory analysis with POST. However, without the instincts of a human in the loop, this method relied upon Monte Carlo simulation to find successful trajectories. Overall the method has mixed results, and in the context of optimizing multiple vehicles it is inefficient in comparison to the method presented here. POST's internal optimizer functions like any other gradient-based optimizer. It has a specified variable to optimize, whose value is represented as optval; a set of dependent constraints to meet, with associated forms and tolerances, whose value is represented as p2; and a set of independent variables, known as the u-vector, to modify in pursuit of optimality. Each of these quantities is calculated or manipulated at a certain phase within the trajectory. The optimizer is further constrained by the requirement that the input u-vector must result in a trajectory which proceeds through each of the prescribed events in the input file. For example, if the input u-vector causes the vehicle to crash before it can achieve the orbital parameters required for a parking orbit, then the run will fail without engaging the optimizer, and a p2 value of exactly zero is returned. This poses a problem, as this "non-connecting" region of the u-vector space is far larger than the "connecting" region, which returns a non-zero value of p2 and can be worked on by the internal optimizer. Finding this connecting region, and more specifically the global optimum within it, has traditionally required the use of an expert analyst.
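The "artificial p2" idea in the title can be illustrated as a penalty that gives an external optimizer a gradient even in the non-connecting region: when a run fails, score it by how far the trajectory got rather than returning zero. The wrapper below is a hypothetical sketch; the `run_post` interface and the event-progress metric are assumptions, not POST's actual API:

```python
def evaluate(u_vector, run_post):
    """Score a candidate u-vector for an external optimizer.

    run_post is assumed to return (connected, p2, events_completed,
    total_events); a real POST run only exposes the first two.
    """
    connected, p2, done, total = run_post(u_vector)
    if connected:
        # Inside the connecting region: use the real constraint error.
        return p2
    # Outside it POST would report p2 == 0, which is uninformative.
    # Substitute an "artificial p2" that rewards completing more of
    # the prescribed event sequence, so the search can climb toward
    # the connecting region.
    big = 1e6
    return big * (1.0 - done / total)
```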
A Framework for Dimensioning VDL-2 Air-Ground Networks
NASA Technical Reports Server (NTRS)
Ribeiro, Leila Z.; Monticone, Leone C.; Snow, Richard E.; Box, Frank; Apaza, Rafel; Bretmersky, Steven
2014-01-01
This paper describes a framework developed at MITRE for dimensioning a Very High Frequency (VHF) Digital Link Mode 2 (VDL-2) Air-to-Ground network. This framework was developed to support the FAA's Data Communications (Data Comm) program by providing estimates of expected capacity required for the air-ground network services that will support Controller-Pilot-Data-Link Communications (CPDLC), as well as the spectrum needed to operate the system at required levels of performance. The Data Comm program is part of the FAA's NextGen initiative to implement advanced communication capabilities in the National Airspace System (NAS). The first component of the framework is the radio-frequency (RF) coverage design for the network ground stations. Then we proceed to describe the approach used to assess the aircraft geographical distribution and the data traffic demand expected in the network. The next step is the resource allocation utilizing optimization algorithms developed in MITRE's Spectrum ProspectorTM tool to propose frequency assignment solutions, and a NASA-developed VDL-2 tool to perform simulations and determine whether a proposed plan meets the desired performance requirements. The framework presented is capable of providing quantitative estimates of multiple variables related to the air-ground network, in order to satisfy established coverage, capacity and latency performance requirements. Outputs include: coverage provided at different altitudes; data capacity required in the network, aggregated or on a per ground station basis; spectrum (pool of frequencies) needed for the system to meet a target performance; optimized frequency plan for a given scenario; expected performance given spectrum available; and, estimates of throughput distributions for a given scenario. We conclude with a discussion aimed at providing insight into the tradeoffs and challenges identified with respect to radio resource management for VDL-2 air-ground networks.
Physical and energy requirements of competitive swimming events.
Pyne, David B; Sharp, Rick L
2014-08-01
The aquatic sports competitions held during the summer Olympic Games include diving, open-water swimming, pool swimming, synchronized swimming, and water polo. Elite-level performance in each of these sports requires rigorous training and practice to develop the appropriate physiological, biomechanical, artistic, and strategic capabilities specific to each sport. Consequently, the daily training plans of these athletes are quite varied both between and within the sports. Common to all aquatic athletes, however, is that daily training and preparation consumes several hours and involves frequent periods of high-intensity exertion. Nutritional support for this high-level training is a critical element of the preparation of these athletes to ensure the energy and nutrient demands of the training and competition are met. In this article, we introduce the fundamental physical requirements of these sports and specifically explore the energetics of human locomotion in water. Subsequent articles in this issue explore the specific nutritional requirements of each aquatic sport. We hope that such exploration will provide a foundation for future investigation of the roles of optimal nutrition in optimizing performance in the aquatic sports.
Optimal design application on the advanced aeroelastic rotor blade
NASA Technical Reports Server (NTRS)
Wei, F. S.; Jones, R.
1985-01-01
The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The databases obtained from the rotorcraft flight simulation program C81 and the Myklestad mode shape program are represented analytically as functions of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. The method can also be used to ascertain, without additional effort, the effect of a particular cost function composed of several objective functions with different weighting factors for various mission requirements.
Difficulty of distinguishing product states locally
NASA Astrophysics Data System (ADS)
Croke, Sarah; Barnett, Stephen M.
2017-01-01
Nonlocality without entanglement is a rather counterintuitive phenomenon in which information may be encoded entirely in product (unentangled) states of composite quantum systems in such a way that local measurement of the subsystems is not enough for optimal decoding. For simple examples of pure product states, the gap in performance is known to be rather small when arbitrary local strategies are allowed. Here we restrict to local strategies readily achievable with current technology: those requiring neither a quantum memory nor joint operations. We show that even for measurements on pure product states, there can be a large gap between such strategies and theoretically optimal performance. Thus, even in the absence of entanglement, physically realizable local strategies can be far from optimal for extracting quantum information.
Detailed design of a lattice composite fuselage structure by a mixed optimization method
NASA Astrophysics Data System (ADS)
Liu, D.; Lohse-Busch, H.; Toropov, V.; Hühne, C.; Armani, U.
2016-10-01
In this article, a procedure for designing a lattice fuselage barrel is developed. It comprises three stages: first, topology optimization of an aircraft fuselage barrel is performed with respect to weight and structural performance to obtain the conceptual design. The interpretation of the optimal result is given to demonstrate the development of this new lattice airframe concept for the fuselage barrel. Subsequently, parametric optimization of the lattice aircraft fuselage barrel is carried out using genetic algorithms on metamodels generated with genetic programming from a 101-point optimal Latin hypercube design of experiments. The optimal design is achieved in terms of weight savings subject to stability, global stiffness and strain requirements, and then verified by the fine mesh finite element simulation of the lattice fuselage barrel. Finally, a practical design of the composite skin complying with the aircraft industry lay-up rules is presented. It is concluded that the mixed optimization method, combining topology optimization with the global metamodel-based approach, allows the problem to be solved with sufficient accuracy and provides the designers with a wealth of information on the structural behaviour of the novel anisogrid composite fuselage design.
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
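The continuous core of a pattern search can be sketched in a few lines: poll the objective at points around the incumbent along coordinate directions, move on improvement, and shrink the step otherwise. This compass-search sketch deliberately omits the mixed-variable and surrogate machinery of the generalized method described above:

```python
import numpy as np

def compass_search(f, x0, step=0.5, tol=1e-6, max_iter=10_000):
    """Derivative-free pattern (compass) search: poll +-step along
    each coordinate; accept any improving point, else halve step."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):
            y = x + step * d
            fy = f(y)
            if fy < fx:
                x, fx, improved = y, fy, True
                break
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# Example: minimize a simple quadratic objective.
print(compass_search(lambda z: np.sum((z - 1.0) ** 2), [3.0, -2.0]))
```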
Compressing Aviation Data in XML Format
NASA Technical Reports Server (NTRS)
Patel, Hemil; Lau, Derek; Kulkarni, Deepak
2003-01-01
Design, operations, and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of the data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly, and it also increases storage requirements. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as XMill and XMLPPM generally outperform traditional compressors. However, optimal use of XMill requires discovery of the optimal options to use while running XMill, which in turn depends on the nature of the data. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data and what optimal settings should be used with it.
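General-purpose compressors from the Python standard library are enough to reproduce the kind of size comparison described here; XML-specific tools like XMill are external programs and are not shown. A small hedged sketch with synthetic flight-record XML (the record contents are invented):

```python
import bz2, lzma, zlib

xml = (b"<flights>" + b"".join(
    b"<flight id='%d'><origin>SFO</origin><dest>JFK</dest></flight>" % i
    for i in range(1000)) + b"</flights>")

# Compare raw size against three general-purpose compressors at
# their default settings; XML-specific tools typically do better
# by separating document structure from content before encoding.
print("raw :", len(xml))
print("zlib:", len(zlib.compress(xml)))
print("bz2 :", len(bz2.compress(xml)))
print("lzma:", len(lzma.compress(xml)))
```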
NASA Technical Reports Server (NTRS)
Logston, R. G.; Budris, G. D.
1977-01-01
The methodology to optimize the utilization of Spacelab racks and pallets and to apply this methodology to the early STS Spacelab missions was developed. A review was made of Spacelab Program requirements and flow plans, generic flow plans for racks and pallets were examined, and the principal optimization criteria and methodology were established. Interactions between schedule, inventory, and key optimization factors; schedule and cost sensitivity to optional approaches; and the development of tradeoff methodology were addressed. This methodology was then applied to early spacelab missions (1980-1982). Rack and pallet requirements and duty cycles were defined, a utilization assessment was made, and several trade studies performed involving varying degrees of Level IV integration, inventory level, and shared versus dedicated Spacelab racks and pallets.
Method of Optimizing the Construction of Machining, Assembly and Control Devices
NASA Astrophysics Data System (ADS)
Iordache, D. M.; Costea, A.; Niţu, E. L.; Rizea, A. D.; Babă, A.
2017-10-01
Industry dynamics, driven by economic and social requirements, must generate more interest in technological optimization, capable of ensuring the steady development of advanced technical means to equip machining processes. For these reasons, the development of tools, devices, work equipment, and control, as well as the modernization of machine tools, is the surest way to modernize production systems that require considerable time and effort. This type of approach also underlies our theoretical, experimental, and industrial applications of recent years, presented in this paper, whose main objectives are the elaboration and use of mathematical models, new calculation methods, optimization algorithms, new processing and control methods, and structures for the construction and configuration of technological equipment with a high level of performance and substantially reduced costs.
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
Determining the optimal requirements for, and design variable values of, new systems that operate alongside existing systems to provide a set of overarching capabilities is challenging as a single task, because setting requirements on a new system's design has highly interconnected effects on how an operator uses the newly designed system. The task becomes even more difficult in the presence of uncertainties in the new system design and in the operational environment. This research proposed and investigated aspects of a framework that generates optimal design requirements for new, yet-to-be-designed systems that, when operating alongside other systems, will optimize fleet-level objectives while considering the effects of various uncertainties. Specifically, this research addresses uncertainty in the design of the new system through reliability-based design optimization methods, and uncertainty in the operations of the fleet through descriptive sampling methods and robust optimization formulations. In this context, fleet-level performance metrics result from using the new system alongside other systems to accomplish an overarching objective or mission. The approach treats the design requirements of a new system as decision variables in an optimization problem that a user in the position of making an acquisition decision could solve. The solution indicates the best new system requirements, and an associated description of the best possible design variable values for that new system, to optimize the fleet-level performance metric(s). Using a problem motivated by recorded operations of the United States Air Force Air Mobility Command for illustration, the approach is demonstrated first for a simplified problem that considers only demand uncertainties in the service network; the proposed methodology is used to identify the optimal design requirements and optimal aircraft sizing variables of a new, yet-to-be-introduced aircraft. With this new aircraft serving alongside existing aircraft, the fleet satisfies the desired demand for cargo transportation while maximizing fleet productivity and minimizing fuel consumption via a multi-objective problem formulation. The approach is then extended to handle uncertainties in both the design of the new system and the operations of the fleet. The propagation of uncertainties associated with the conceptual design of the new aircraft to the uncertainties associated with the subsequent operations of the new and existing aircraft in the fleet presents some unique challenges. A computationally tractable hybrid robust counterpart formulation efficiently handles the confluence of the two types of domain-specific uncertainties. This hybrid formulation is tested on a larger route network problem to demonstrate the scalability of the approach. Following the presentation of the results, a summary discussion indicates how decision-makers might use these results to set requirements for new aircraft that meet operational needs while balancing the environmental impact of the fleet against fleet-level performance. Comparing the solutions from the uncertainty-based and deterministic formulations via a posteriori analysis demonstrates the efficacy of the robust and reliability-based optimization formulations in addressing the different domain-specific uncertainties.
Results suggest that the aircraft design requirements and design description determined through the hybrid robust counterpart formulation differ from the solutions obtained from the simpler deterministic approach and lead to greater fleet-level fuel savings when subjected to real-world uncertain scenarios (i.e., they are more robust to uncertainty). The research, though applied to a specific air cargo application, is domain-agnostic in nature and can be applied to other facets of policy and acquisition management to explore capability trade spaces for different vehicle systems, mitigate risks, define policy, and potentially generate better returns on investment. Other domains relevant to policy and acquisition decisions could utilize the problem formulation and solution approach proposed in this dissertation, provided that the problem can be split into a nonlinear programming problem describing the sizing of the new system and a fleet operations problem that can be posed as a linear/integer programming problem.
A Multidisciplinary Performance Analysis of a Lifting-Body Single-Stage-to-Orbit Vehicle
NASA Technical Reports Server (NTRS)
Tartabini, Paul V.; Lepsch, Roger A.; Korte, J. J.; Wurster, Kathryn E.
2000-01-01
Lockheed Martin Skunk Works (LMSW) is currently developing a single-stage-to-orbit reusable launch vehicle called VentureStar(TM). A team at NASA Langley Research Center participated with LMSW in the screening and evaluation of a number of early VentureStar(TM) configurations. The performance analyses that supported these initial studies were conducted to assess the effect of a lifting-body shape, linear aerospike engine, and metallic thermal protection system (TPS) on the weight and performance of the vehicle. These performance studies were performed in a multidisciplinary fashion that indirectly linked the trajectory optimization with weight estimation and aerothermal analysis tools. This approach was necessary to develop optimized ascent and entry trajectories that met all vehicle design constraints. Significant improvements in ascent performance were achieved when the vehicle flew a lifting trajectory and varied the engine mixture ratio during flight. Also, a considerable reduction in empty weight was possible by adjusting the total oxidizer-to-fuel and liftoff thrust-to-weight ratios. However, the optimal ascent flight profile had to be altered to ensure that the vehicle could be trimmed in pitch using only the flow-diverting capability of the aerospike engine. Likewise, the optimal entry trajectory had to be tailored to meet TPS heating rate and transition constraints while satisfying a crossrange requirement.
Wortmann, Birgit; Knorr, Jürgen
2012-08-01
In 2001 and 2003, at the University of Pavia, Italy, boron neutron capture therapy (BNCT) was successfully used in the treatment of hepatic colorectal metastases (Pinelli et al., 2002; Zonta et al., 2006). The treatment procedure (TAOrMINA protocol) is characterised by auto-transplantation and extracorporeal irradiation of the liver using a thermal neutron beam. The clinical use of this approach requires well-founded data and an optimized irradiation facility. In order to start this work and to decide upon its feasibility at the research reactor TRIGA Mainz, basic data and requirements have been considered (Wortmann, 2008). Computer calculations using the ATTILA (Transpire Inc., 2006) and MCNP (LANL, 2005) codes have been performed, incorporating data from conventional radiation therapy and from the TAOrMINA approach, and resulting in reasonable estimations. Basic data, requirements, and optimal parameters have been worked out, especially for use at an optimized TRIGA irradiation facility (Wortmann, 2008). Advantages of extracorporeal irradiation with auto-transplantation and the potential of an optimized irradiation facility could be identified. Within the requirements, turning the explanted organ over by 180° appears preferable to a whole-side source, similar to a permanent rotation of the organ. The design study and the parameter optimization confirm the potential of this approach to treat metastases in explanted organs. The results do not represent actual treatment data but a first estimation. Although all specific values refer to the TRIGA Mainz, they may act as a useful guide for other types of neutron sources. The recommended modifications (Wortmann, 2008) show the suitability of TRIGA reactors as a radiation source for BNCT of extracorporeally irradiated and auto-transplanted organs.
Performance Trades Study for Robust Airfoil Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu; Padula, Sharon
2003-01-01
From time to time, existing aircraft need to be redesigned for new missions with modified operating conditions such as required lift or cruise speed. This research is motivated by the needs of conceptual and preliminary design teams for smooth airfoil shapes that are similar to the baseline design but have improved drag performance over a range of flight conditions. The proposed modified profile optimization method (MPOM) modifies a large number of design variables to search for nonintuitive performance improvements, while avoiding off-design performance degradation. Given a good initial design, the MPOM generates fairly smooth airfoils that are better than the baseline without making drastic shape changes. Moreover, the MPOM allows users to gain valuable information by exploring performance trades over various design conditions. Four simulation cases of airfoil optimization in transonic viscous flow are included to demonstrate the usefulness of the MPOM as a performance trades study tool. Simulation results are obtained by solving fully turbulent Navier-Stokes equations and the corresponding discrete adjoint equations using an unstructured-grid computational fluid dynamics code, FUN2D.
Optimal Design of Cable-Driven Manipulators Using Particle Swarm Optimization.
Bryson, Joshua T; Jin, Xin; Agrawal, Sunil K
2016-08-01
The design of cable-driven manipulators is complicated by the unidirectional nature of the cables, which results in extra actuators and limited workspaces. Furthermore, the particular arrangement of the cables and the geometry of the robot pose have a significant effect on the cable tension required to effect a desired joint torque. For a sufficiently complex robot, the identification of a satisfactory cable architecture can be difficult and can result in multiply redundant actuators and performance limitations based on workspace size and cable tensions. This work leverages previous research into the workspace analysis of cable systems combined with stochastic optimization to develop a generalized methodology for designing optimized cable routings for a given robot and desired task. A cable-driven robot leg performing a walking-gait motion is used as a motivating example to illustrate the methodology application. The components of the methodology are described, and the process is applied to the example problem. An optimal cable routing is identified, which provides the necessary controllable workspace to perform the desired task and enables the robot to perform that task with minimal cable tensions. A robot leg is constructed according to this routing and used to validate the theoretical model and to demonstrate the effectiveness of the resulting cable architecture.
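Particle swarm optimization itself is compact enough to sketch directly. Below is a generic PSO loop on a toy objective, not the paper's cable-routing formulation; the swarm size, inertia, and acceleration constants are conventional assumed values:

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(4)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                           # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]                  # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, f(g)

# Toy objective standing in for, e.g., peak cable tension over a gait.
lo, hi = np.full(6, -2.0), np.full(6, 2.0)
print(pso(lambda z: np.sum(z**2), (lo, hi)))
```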
NASA Astrophysics Data System (ADS)
Medi, Bijan; Kazi, Monzure-Khoda; Amanullah, Mohammad
2013-06-01
Chromatography has been established as the method of choice for the separation and purification of optically pure drugs, a market of about 250 billion USD. Single-column chromatography (SCC) is commonly used in the development and testing phases of drug development, while multi-column simulated moving bed (SMB) chromatography is more suitable for large-scale production due to its continuous nature. In this study, the optimal performance of SCC and SMB processes for the separation of optical isomers under linear and overloaded separation conditions has been investigated. The performance indicators, namely productivity and desorbent requirement, have been compared under geometric similarity for the separation of a mixture of guaifenesin and Tröger's base enantiomers. The SCC process has been analyzed under the equilibrium assumption, i.e., assuming infinite column efficiency and zero dispersion, and its optimal performance parameters are compared with the optimal prediction for an SMB process by triangle theory. Simulation results obtained using actual experimental data indicate that SCC may compete with SMB in terms of productivity, depending on the molecules to be separated. Besides, insights into the process performances in terms of degrees of freedom and the relationship between the optimal operating point and the solubility limit of the optical isomers have been ascertained. This investigation enables appropriate selection of single- or multi-column chromatographic processes based on column packing properties and isotherm parameters.
Optimal digital filtering for tremor suppression.
Gonzalez, J G; Heredia, E A; Rahman, T; Barner, K E; Arce, G R
2000-05-01
Remote manually operated tasks, such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor on a computer screen). When human movements are distorted, for instance by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination, and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms and a customizable demo for PC joysticks, are available on the Internet at http://tremor-suppression.com.
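A standard reference point for such equalizer design is the FIR Wiener filter, whose normal equations involve exactly the autocorrelation matrix described above as ill-conditioned. The Tikhonov (ridge) regularization shown below is a generic stabilizer, not the paper's pulled-optimization method, and the tremor signal is synthetic:

```python
import numpy as np

def wiener_fir(x, d, n_taps=32, ridge=1e-3):
    """Design an FIR equalizer w minimizing ||d - w*x||^2.
    Solves (R + ridge*I) w = p, where R is the (often ill-
    conditioned) autocorrelation matrix of the tremorous input x
    and p is the cross-correlation with the desired output d."""
    N = len(x)
    X = np.column_stack([np.roll(x, k) for k in range(n_taps)])
    X[:n_taps] = 0  # discard wrapped samples introduced by the roll
    R = X.T @ X / N
    p = X.T @ d / N
    return np.linalg.solve(R + ridge * np.eye(n_taps), p)

# Toy demo: recover intended motion from motion plus an 8 Hz tremor.
t = np.arange(0, 4, 1e-2)
intended = np.sin(2 * np.pi * 0.5 * t)
measured = intended + 0.4 * np.sin(2 * np.pi * 8 * t)
w = wiener_fir(measured, intended)
filtered = np.convolve(measured, w)[:t.size]
print(np.mean((filtered - intended) ** 2))  # residual tracking error
```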
Regenerative Life Support Systems Test Bed performance - Lettuce crop characterization
NASA Technical Reports Server (NTRS)
Barta, Daniel J.; Edeen, Marybeth A.; Eckhardt, Bradley D.
1992-01-01
System performance in terms of human life support requirements was evaluated for two crops of lettuce (Lactuca sativa cv. Waldmann's Green) grown in the Regenerative Life Support Systems Test Bed. Each crop, grown in separate pots under identical environmental and cultural conditions, was irrigated with half-strength Hoagland's nutrient solution, with the frequency of irrigation being increased as the crop aged over the 30-day crop tests. Averaging over both crop tests, the test bed met the requirements of 2.1 person-days of oxygen production, 2.4 person-days of CO2 removal, and 129 person-days of potential potable water production. Gains in the mass of water and O2 produced and CO2 removed could be achieved by optimizing environmental conditions to increase plant growth rate and by optimizing cultural management methods.
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.
Design optimization of piezoresistive cantilevers for force sensing in air and water
Doll, Joseph C.; Park, Sung-Jin; Pruitt, Beth L.
2009-01-01
Piezoresistive cantilevers fabricated from doped silicon or metal films are commonly used for force, topography, and chemical sensing at the micro- and macroscales. Proper design is required to optimize the achievable resolution by maximizing sensitivity while simultaneously minimizing the integrated noise over the bandwidth of interest. Existing analytical design methods are insufficient for modeling complex dopant profiles, design constraints, and nonlinear phenomena such as damping in fluid. Here we present an optimization method based on an analytical piezoresistive cantilever model. We use an existing iterative optimizer to minimize a performance goal, such as minimum detectable force. The design tool is available as open source software. Optimal cantilever design and performance are found to strongly depend on the measurement bandwidth and the constraints applied. We discuss results for silicon piezoresistors fabricated by epitaxy and diffusion, but the method can be applied to any dopant profile or material which can be modeled in a similar fashion, or extended to other microelectromechanical systems. PMID:19865512
Model-Based Design of Tree WSNs for Decentralized Detection
Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam
2015-01-01
The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments have been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches. PMID:26307989
Dynamic characteristics of stay cables with inerter dampers
NASA Astrophysics Data System (ADS)
Shi, Xiang; Zhu, Songye
2018-06-01
This study systematically investigates the dynamic characteristics of a stay cable with an inerter damper installed close to one end of the cable. The interest in applying inerter dampers to stay cables is partially inspired by the superior damping performance of negative stiffness dampers in the same application. A comprehensive parametric study on the two major parameters, namely the inertance and damping coefficients, is conducted using analytical and numerical approaches. An inerter damper can be optimized for one vibration mode of a stay cable by generating identical wave numbers in two adjacent modes. An optimal design approach is proposed for inerter dampers installed on stay cables, and the corresponding optimal inertance and damping coefficients are summarized for different damper locations and modes of interest. Inerter dampers can offer better damping performance than conventional viscous dampers for the target mode of a stay cable that requires optimization. However, the additional damping ratios provided by an inerter damper in other vibration modes are relatively limited.
Parameter optimization of electrochemical machining process using black hole algorithm
NASA Astrophysics Data System (ADS)
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant as higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered to evaluate its performance using the black hole algorithm (BHA). BHA is built on the basic idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the process, considering a single objective at a time. The results obtained using BHA are better than those of other metaheuristic algorithms attempted by previous researchers, such as the genetic algorithm (GA), artificial bee colony (ABC), and biogeography-based optimization (BBO).
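For readers unfamiliar with BHA, the following is a minimal sketch of its commonly published mechanics: the best star acts as the black hole, the others drift toward it, and stars that cross the event horizon are swallowed and re-seeded at random. The sphere objective is a hypothetical stand-in for an MRR or overcut model, not the paper's process model.

```python
import numpy as np

def black_hole_optimize(f, lo, hi, n_stars=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    stars = rng.uniform(lo, hi, (n_stars, dim))
    best_x, best_f = None, np.inf
    for _ in range(iters):
        fit = np.array([f(s) for s in stars])
        idx = int(np.argmin(fit))
        bh, f_bh = stars[idx].copy(), fit[idx]       # best star = black hole
        if f_bh < best_f:
            best_x, best_f = bh.copy(), f_bh
        stars += rng.random((n_stars, 1)) * (bh - stars)  # drift toward BH
        stars[idx] = bh                              # the black hole stays put
        # Event-horizon radius: |f_BH| over the sum of all fitness values
        radius = abs(f_bh) / (np.sum(np.abs(fit)) + 1e-12)
        gone = np.linalg.norm(stars - bh, axis=1) < radius
        gone[idx] = False                            # never re-seed the BH
        stars[gone] = rng.uniform(lo, hi, (int(gone.sum()), dim))
    return best_x, best_f

# Toy usage on a 2-D sphere function
x, fx = black_hole_optimize(lambda v: float(np.sum(v ** 2)),
                            np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(x, fx)
```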
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Joshi, Suresh M.; Armstrong, Ernest S.
1993-01-01
An approach for an optimization-based integrated controls-structures design is presented for a class of flexible spacecraft that require fine attitude pointing and vibration suppression. The integrated design problem is posed in the form of simultaneous optimization of both structural and control design variables. The approach is demonstrated by application to the integrated design of a generic space platform and to a model of a ground-based flexible structure. The numerical results obtained indicate that the integrated design approach can yield spacecraft designs that have substantially superior performance over a conventional design wherein the structural and control designs are performed sequentially. For example, a 40-percent reduction in the pointing error is observed along with a slight reduction in mass, or an almost twofold increase in the controlled performance is indicated with more than a 5-percent reduction in the overall mass of the spacecraft (a reduction of hundreds of kilograms).
Systematic Sensor Selection Strategy (S4) User Guide
NASA Technical Reports Server (NTRS)
Sowers, T. Shane
2012-01-01
This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight, and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinatorial optimization with a user-defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open source turbofan engine simulation to demonstrate its application.
NASA Technical Reports Server (NTRS)
Rousseau, J.; Hwang, K. C.
1975-01-01
Investigations aimed at the optimization of a baseline Rankine cycle solar powered air conditioner and the development of a preliminary system specification were conducted. Efforts encompassed the following: (1) investigations of the use of recuperators/regenerators to enhance the performance of the baseline system, (2) development of an off-design computer program for system performance prediction, (3) optimization of the turbocompressor design to cover a broad range of conditions and permit operation at low heat source water temperatures, (4) generation of parametric data describing system performance (COP and capacity), (5) development and evaluation of candidate system augmentation concepts and selection of the optimum approach, (6) generation of auxiliary power requirement data, (7) development of a complete solar collector-thermal storage-air conditioner computer program, (8) evaluation of the baseline Rankine air conditioner over a five day period simulating the NASA solar house operation, and (9) evaluation of the air conditioner as a heat pump.
Supersonic civil airplane study and design: Performance and sonic boom
NASA Technical Reports Server (NTRS)
Cheung, Samson
1995-01-01
Since aircraft configuration plays an important role in aerodynamic performance and sonic boom shape, the configuration of the next-generation supersonic civil transport has to be tailored to meet high aerodynamic performance and low sonic boom requirements. Computational fluid dynamics (CFD) can be used to design airplanes to meet these dual objectives. The work and results in this report are used to support NASA's High Speed Research Program (HSRP). CFD tools and techniques have been developed for general use in sonic boom propagation studies and aerodynamic design. Parallel to the research effort on sonic boom extrapolation, CFD flow solvers have been coupled with a numerical optimization tool to form a design package for aircraft configuration. This CFD optimization package has been applied to configuration design for a low-boom concept and an oblique all-wing concept. A nonlinear unconstrained optimizer for the Parallel Virtual Machine has been developed for aerodynamic design and study.
NASA Astrophysics Data System (ADS)
Han, Jinhyup; Hwang, Soo Min; Go, Wooseok; Senthilkumar, S. T.; Jeon, Donghoon; Kim, Youngsik
2018-01-01
Cell design and optimization of the components, including active materials and passive components, play an important role in constructing robust, high-performance rechargeable batteries. Seawater batteries, which utilize earth-abundant, natural seawater as the active material in an open-structured cathode, require a new platform for building and testing cells other than typical Li-ion coin-type or pouch-type cells. Herein, we present new findings based on our optimized cell. Engineering the cathode components, namely improving the wettability of the cathode current collector and the seawater catholyte flow, improves the battery performance (voltage efficiency). Optimizing the cell components and design is key to identifying the electrochemical processes and reactions of the active materials. Hence, the outcome of this research can provide a systematic study of potentially active materials used in seawater batteries and their effectiveness on the electrochemical performance.
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
Optimal Diabatic Dynamics of Majorana-based Topological Qubits
NASA Astrophysics Data System (ADS)
Seradjeh, Babak; Rahmani, Armin; Franz, Marcel
In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles such as Majorana zero modes and are protected from local environmental perturbations. This scheme requires slow operations. By using Pontryagin's maximum principle, here we show that the same quantum gates can be implemented in much shorter times through optimal diabatic pulses. While our fast diabatic gates do not enjoy topological protection, they provide significant practical advantages due to their optimal speed and remarkable robustness to calibration errors and noise. NSERC, CIfAR, NSF DMR-1350663, BSF 2014345.
Multidisciplinary Analysis and Optimization Generation 1 and Next Steps
NASA Technical Reports Server (NTRS)
Naiman, Cynthia Gutierrez
2008-01-01
The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY)08: "Requirements Definition" Milestone (1/31/08); "GEN 1 Integrated Multi-disciplinary Toolset" (Annual Performance Goal) (6/30/08); and "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" Milestone (9/30/08). Details of all three milestones are explained, including available documentation, potential partner collaborations, and next steps in FY09.
An optimal system design process for a Mars roving vehicle
NASA Technical Reports Server (NTRS)
Pavarini, C.; Baker, J.; Goldberg, A.
1971-01-01
The problem of determining the optimal design for a Mars roving vehicle is considered. A system model is generated by consideration of the physical constraints on the design parameters and the requirement that the system be deliverable to the Mars surface. An expression which evaluates system performance relative to mission goals as a function of the design parameters only is developed. The use of nonlinear programming techniques to optimize the design is proposed and an example considering only two of the vehicle subsystems is formulated and solved.
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2011-01-01
Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance ΔV due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this ΔV using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. An example demonstrates the ΔV savings from the feasible solution to the optimal solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportionate to the number of triangle changes per frame, which is typically a few percent of the output mesh size, hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
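A much-simplified, split-only sketch of the priority-queue refinement at the heart of ROAM is given below: bintree triangles are split highest-error-first until a triangle budget is reached. The merge queue, frame-to-frame coherence, stripping, and deferral lists are omitted, and the height field and error metric are toy stand-ins.

```python
import heapq, math

def refine(root_tris, error_of, split, budget):
    """Split triangles worst-error-first until the budget is met."""
    heap = [(-error_of(t), i, t) for i, t in enumerate(root_tris)]
    heapq.heapify(heap)                       # max-heap via negated keys
    count, next_id = len(root_tris), len(root_tris)
    while count < budget and heap:
        neg_err, _, tri = heapq.heappop(heap)
        if -neg_err <= 0:                     # remaining triangles are exact
            break
        for child in split(tri):
            heapq.heappush(heap, (-error_of(child), next_id, child))
            next_id += 1
        count += 1                            # one triangle became two
    return [t for _, _, t in heap]

def h(x, y):                                  # toy height field
    return math.sin(x) * math.cos(y)

def split(t):                                 # bintree split at the hypotenuse
    a, l, r = t                               # apex, left, right vertices
    m = ((l[0] + r[0]) / 2, (l[1] + r[1]) / 2)
    return [(m, a, l), (m, r, a)]

def error_of(t):                              # wedge-thickness proxy
    a, l, r = t
    m = ((l[0] + r[0]) / 2, (l[1] + r[1]) / 2)
    return abs(h(*m) - (h(*l) + h(*r)) / 2)

leaves = refine([((0, 0), (0, 2), (2, 0)), ((2, 2), (2, 0), (0, 2))],
                error_of, split, budget=64)
print(len(leaves), "triangles")
```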
Modeling human decision making behavior in supervisory control
NASA Technical Reports Server (NTRS)
Tulga, M. K.; Sheridan, T. B.
1977-01-01
An optimal decision control model was developed, based primarily on a dynamic programming algorithm that looks at all the available task possibilities, charts an optimal trajectory, commits itself to the first step (i.e., follows the optimal trajectory during the next time period), and then iterates the calculation. A Bayesian estimator was included, which estimates the tasks that might occur in the immediate future and provides this information to the dynamic programming routine. Preliminary trials comparing the human subject's performance to that of the optimal model show a great similarity, but indicate that the human skips certain movements which require a quick change in strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Xiaobiao; Safranek, James
2014-09-01
Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The results show that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.
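For reference, a minimal single-objective particle swarm sketch follows, showing why the method needs no seeding: particles start at random positions and are steered only by their own and the swarm's best points. This is a generic textbook PSO, not the multi-objective variant used in the study.

```python
import numpy as np

def pso(f, lo, hi, n=40, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim))            # random start, no seeding
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Toy usage on the 2-D Rastrigin function
rastrigin = lambda v: 20 + float(np.sum(v**2 - 10 * np.cos(2 * np.pi * v)))
print(pso(rastrigin, np.array([-5.12, -5.12]), np.array([5.12, 5.12])))
```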
NASA Technical Reports Server (NTRS)
1990-01-01
Evaluations directed toward defining optimal instrumentation for performing planetary polarization measurements from a spacecraft platform are summarized. An overview of the science rationale for polarimetric measurements is given to point out the importance of such measurements for future studies and exploration of the outer planets. The key instrument features required to perform the needed measurements are discussed and applied to the requirements of the Cassini mission to Saturn. The resultant conceptual design of a spectro-polarimeter photometer for Cassini is described in detail.
Design of Launch Abort System Thrust Profile and Concept of Operations
NASA Technical Reports Server (NTRS)
Litton, Daniel; O'Keefe, Stephen A.; Winski, Richard G.; Davidson, John B.
2008-01-01
This paper describes how the Abort Motor thrust profile has been tailored and how optimizing the Concept of Operations of the Launch Abort System (LAS) of the Orion Crew Exploration Vehicle (CEV) aids in getting the crew safely away from a failed Crew Launch Vehicle (CLV). Unlike the passive nature of the Apollo system, the Orion Launch Abort Vehicle will be actively controlled, giving the program a more robust abort system with a higher probability of crew survival for an abort at all points throughout the CLV trajectory. By optimizing the concept of operations and thrust profile, the Orion program will be able to take full advantage of the active Orion LAS. The discussion includes an overview of the development of the abort motor thrust profile and the current abort concept of operations, as well as their effects on the performance of LAS aborts. Pad Abort (for performance) and Maximum Drag (for separation from the launch vehicle) are the two points that dictate the required thrust and the shape of the thrust profile. The results in this paper show that the 95% success criterion across all performance requirements is not currently met for Pad Abort. Future improvements to the current parachute sequence and other potential changes will mitigate the current problems and meet abort performance requirements.
Orhan, A Emin; Ma, Wei Ji
2017-07-26
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
Trajectory Optimization of Electric Aircraft Subject to Subsystem Thermal Constraints
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Chin, Jeffrey C.; Schnulo, Sydney L.; Burt, Jonathan M.; Gray, Justin S.
2017-01-01
Electric aircraft pose a unique design challenge in that they lack a simple way to reject waste heat from the power train. While conventional aircraft reject most of their excess heat in the exhaust stream, for electric aircraft this is not an option. To examine the implications of this challenge on electric aircraft design and performance, we developed a model of the electric subsystems for the NASA X-57 electric testbed aircraft. We then coupled this model with a model of simple 2D aircraft dynamics and used a Legendre-Gauss-Lobatto collocation optimal control approach to find optimal trajectories for the aircraft with and without thermal constraints. The results show that the X-57 heat rejection systems are well designed for maximum-range and maximum-efficiency flight, without the need to deviate from an optimal trajectory. Stressing the thermal constraints by reducing the cooling capacity or requiring faster flight has a minimal impact on performance, as the trajectory optimization technique is able to find flight paths which honor the thermal constraints with relatively minor deviations from the nominal optimal trajectory.
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
Miniaturized Air-to-Refrigerant Heat Exchangers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radermacher, Reinhard; Bacellar, Daniel; Aute, Vikrant
Air-to-refrigerant Heat eXchangers (HX) are an essential component of Heating, Ventilation, Air-Conditioning, and Refrigeration (HVAC&R) systems, serving as the main heat transfer component. The major limiting factor to HX performance is the large airside thermal resistance. Recent literature aims at improving heat transfer performance by utilizing enhancement methods such as fins and small tube diameters; this has led to almost exhaustive research on the microchannel HX (MCHX). The objective of this project is to develop a miniaturized air-to-refrigerant HX with at least 20% reduction in volume, material volume, and approach temperature compared to current state-of-the-art multiport flat tube designs, and also be capable of production within five years. Moreover, the proposed HX’s are expected to have good water drainage and should succeed in both evaporator and condenser applications. The project leveraged Parallel-Parametrized Computational Fluid Dynamics (PPCFD) and Approximation-Assisted Optimization (AAO) techniques to perform multi-scale analysis and shape optimization with the intent of developing novel HX designs whose thermal-hydraulic performance exceeds that of state-of-the-art MCHX. Nine heat exchanger geometries were initially chosen for detailed analysis, selected from 35+ geometries which were identified in previous work at the University of Maryland, College Park. The newly developed optimization framework was exercised for three design optimization problems: (DP I) 1.0kW radiator, (DP II) 10kW radiator and (DP III) 10kW two-phase HX. DP I consisted of the design and optimization of 1.0kW air-to-water HX’s which exceeded the project requirements of 20% volume/material reduction and 20% better performance. Two prototypes for the 1.0kW HX were prototyped, tested and validated using newly-designed airside and refrigerant side test facilities. DP II, a scaled version of DP I for 10kW air-to-water HX applications, also yielded optimized HX designs which met project requirements. Attempts to prototype a 10kW version have presented unique manufacturing challenges, especially regarding tube blockages and structural stability. DP III comprised optimizing two-phase HX’s for a 3.0Ton capacity in a heat pump / air-conditioning unit for cooling mode application using R410A as the working fluid. The HX’s theoretically address the project requirements. System-level analysis showed the HX’s achieved up to 15% improvement in COP while also reducing overall unit charge by 30-40%. The project methodology was capable of developing HX’s which can outperform current state-of-the-art MCHX by at least 20% reduction in volume, material volume, and approach temperature. Additionally, the capability for optimization using refrigerant charge as an objective function was developed. The five-year manufacturing feasibility of the proposed HX’s was shown to have a good outlook. Successful prototyping through both conventional manufacturing methods and next generation methods such as additive manufacturing was achieved.
Tool Support for Software Lookup Table Optimization
Wilcox, Chris; Strout, Michelle Mills; Bieman, James M.
2011-01-01
A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0× and 6.9× for two molecular biology algorithms, 1.4× for a molecular dynamics program, 2.1× to 2.8× for a neural network application, and 4.6× for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches.
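The accuracy/size tradeoff at the core of LUT optimization can be sketched in a few lines: grow a uniform table until linear interpolation meets a caller-specified error bound. Mesa automates this as a source-to-source transformation for C/C++; the function, domain, and tolerance below are hypothetical.

```python
import numpy as np

def build_lut(fn, lo, hi, max_err, start=64):
    """Double the table size until interpolation error <= max_err."""
    size = start
    while True:
        xs = np.linspace(lo, hi, size)
        ys = fn(xs)
        probe = np.linspace(lo, hi, 10 * size)   # dense error check
        err = np.max(np.abs(np.interp(probe, xs, ys) - fn(probe)))
        if err <= max_err:
            return xs, ys, err
        size *= 2

xs, ys, err = build_lut(np.exp, 0.0, 4.0, max_err=1e-4)
print(f"{len(xs)} entries, max abs error {err:.2e}")
fast_exp = lambda x: np.interp(x, xs, ys)        # fuzzy reuse of results
```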
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly surpasses that of previous still-image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by adopting a belief-propagation procedure. Experimental results show that the proposed method permits saving up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits an accurate control of the computational complexity, unlike other methods where the computational complexity depends upon the coded sequence.
Elimination sequence optimization for SPAR
NASA Technical Reports Server (NTRS)
Hogan, Harry A.
1986-01-01
SPAR is a large-scale computer program for finite element structural analysis. The program allows user specification of the order in which the joints of a structure are to be eliminated, since this order can have significant influence over solution performance, in terms of both storage requirements and computer time. An efficient elimination sequence can improve performance by over 50% for some problems. Obtaining such sequences, however, requires the expertise of an experienced user and can take hours of tedious effort to effect. Thus, an automatic elimination sequence optimizer would enhance productivity by reducing the analysts' problem definition time and by lowering computer costs. Two possible methods for automating the elimination sequence specification were examined. Several algorithms based on the graph theory representations of sparse matrices were studied with mixed results. Significant improvement in the program performance was achieved, but sequencing by an experienced user still yields substantially better results. The initial results provide encouraging evidence that the potential benefits of such an automatic sequencer would be well worth the effort.
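One classic graph-based heuristic of the kind examined, minimum-degree ordering, is sketched below: joints are eliminated least-connected-first, and each elimination couples the remaining neighbors (fill-in). The five-joint structure is a toy example, not a SPAR model, and SPAR's internal handling may differ.

```python
def minimum_degree_order(adj):
    """adj: dict mapping joint -> set of connected joints (symmetric)."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # joint of least degree
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= (nbrs - {u})                # fill-in: couple neighbors
        order.append(v)
    return order

# Toy 5-joint structure: a square frame (1-2-3-4) with joint 5 hung off 3
graph = {1: {2, 4}, 2: {1, 3}, 3: {2, 4, 5}, 4: {1, 3}, 5: {3}}
print(minimum_degree_order(graph))
```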
Ma, Jian; Bai, Bing; Wang, Liu-Jun; Tong, Cun-Zhu; Jin, Ge; Zhang, Jun; Pan, Jian-Wei
2016-09-20
InGaAs/InP single-photon avalanche diodes (SPADs) are widely used in practical applications requiring near-infrared photon counting, such as quantum key distribution (QKD). Photon detection efficiency and dark count rate are the intrinsic parameters of InGaAs/InP SPADs, because their performance cannot be improved by different quenching electronics under the same operating conditions. After modeling these parameters and developing a simulation platform for InGaAs/InP SPADs, we investigate the semiconductor structure design and optimization. Photon detection efficiency and dark count rate depend strongly on the absorption layer thickness, multiplication layer thickness, excess bias voltage, and temperature. By evaluating decoy-state QKD performance, the variables for SPAD design and operation can be globally optimized. Such optimization from the perspective of a specific application provides an effective approach to designing high-performance InGaAs/InP SPADs.
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
Chasing a Comet with a Solar Sail
NASA Technical Reports Server (NTRS)
Stough, Robert W.; Heaton, Andrew F.; Whorton, Mark S.
2008-01-01
Solar sail propulsion systems enable a wide range of missions that require constant thrust or high delta-V over long mission times. One particularly challenging mission type is a comet rendezvous mission. This paper presents optimal low-thrust trajectory designs for a range of sailcraft performance metrics and mission transit times that enables a comet rendezvous mission. These optimal trajectory results provide a trade space which can be parameterized in terms of mission duration and sailcraft performance parameters such that a design space for a small satellite comet chaser mission is identified. These results show that a feasible space exists for a small satellite to perform a comet chaser mission in a reasonable mission time.
Numerical aerodynamic simulation facility. Preliminary study extension
NASA Technical Reports Server (NTRS)
1978-01-01
The production of an optimized design of key elements of the candidate facility was the primary objective of this report. This was accomplished by effort in the following tasks: (1) to further develop, optimize, and describe the functional description of the custom hardware; (2) to delineate trade-off areas between performance, reliability, availability, serviceability, and programmability; (3) to develop metrics and models for validation of the candidate system's performance; (4) to conduct a functional simulation of the system design; (5) to perform a reliability analysis of the system design; and (6) to develop the software specifications, including a user-level high-level programming language and a correspondence between the programming language and the instruction set, and to outline the operating system requirements.
Theory and design of interferometric synthetic aperture radars
NASA Technical Reports Server (NTRS)
Rodriguez, E.; Martin, J. M.
1992-01-01
A derivation of the signal statistics, an optimal estimator of the interferometric phase, and the expressions necessary to calculate the height-error budget are presented. These expressions are used to derive methods of optimizing the parameters of the interferometric synthetic aperture radar system (InSAR), and are then employed in a specific design example for a system to perform high-resolution global topographic mapping with a one-year mission lifetime, subject to current technological constraints. A Monte Carlo simulation of this InSAR system is performed to evaluate its performance for realistic topography. The results indicate that this system has the potential to satisfy the stringent accuracy and resolution requirements for geophysical use of global topographic data.
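For context, the standard single-pass InSAR height-sensitivity relation is reproduced below in its common textbook form (the paper's exact error-budget expressions may differ):

```latex
% Phase noise maps to height error through the baseline geometry:
\[
  \sigma_h \;=\; \frac{\lambda \, r \, \sin\theta}{2\pi \, B_\perp}\,\sigma_\phi ,
\]
% where \lambda is the radar wavelength, r the slant range, \theta the look
% angle, and B_\perp the perpendicular baseline. Longer baselines and shorter
% ranges reduce the height error for a given phase noise \sigma_\phi.
```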
Assessing performance in complex team environments.
Whitmore, Jeffrey N
2005-07-01
This paper provides a brief introduction to team performance assessment. It highlights some critical aspects leading to the successful measurement of team performance in realistic console operations; discusses the idea of process and outcome measures; presents two types of team data collection systems; and provides an example of team performance assessment. Team performance assessment is a complicated endeavor relative to assessing individual performance. Assessing team performance necessitates a clear understanding of each operator's task, both at the individual and team level, and requires planning for efficient data capture and analysis. Though team performance assessment requires considerable effort, the results can be very worthwhile. Most tasks performed in Command and Control environments are team tasks, and understanding this type of performance is becoming increasingly important to the evaluation of mission success and for overall system optimization.
Spiking neural network simulation: memory-optimal synaptic event scheduling.
Stewart, Robert D; Gurney, Kevin N
2011-06-01
Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
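A minimal sketch of the idea of neuron-indexed scheduling with circular delay buffers follows: memory scales as neurons times the maximum delay rather than with the synapse count. The network sizes, weights, and external drive are toy values, and this is not the authors' exact algorithm.

```python
import numpy as np

N, D, T = 1000, 16, 200                 # neurons, max delay (steps), sim steps
rng = np.random.default_rng(0)
targets = [rng.integers(0, N, 100) for _ in range(N)]   # 100 synapses/neuron
delays  = [rng.integers(1, D, 100) for _ in range(N)]   # per-synapse delays
buffers = np.zeros((D, N))              # slot t % D holds input due at step t
v = np.zeros(N)                         # membrane state

for t in range(T):
    v += buffers[t % D]                 # deliver all events due now
    buffers[t % D] = 0.0                # free the slot for step t + D
    v += rng.random(N) < 0.01           # toy external drive
    spikes = np.flatnonzero(v >= 1.0)
    v[spikes] = 0.0                     # reset after spiking
    for s in spikes:                    # schedule future synaptic events
        np.add.at(buffers, ((t + delays[s]) % D, targets[s]), 0.1)
print("memory slots:", buffers.size, "(neurons x max delay, not synapses)")
```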
Aerodynamics as a subway design parameter
NASA Technical Reports Server (NTRS)
Kurtz, D. W.
1976-01-01
A parametric sensitivity study has been performed on the system operational energy requirement in order to guide subway design strategy. Aerodynamics can play a dominant or trivial role, depending upon the system characteristics. Optimization of the aerodynamic parameters may not minimize the total operational energy. Isolation of the station box from the tunnel and reduction of the inertial power requirements pay the largest dividends in terms of the operational energy requirement.
Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.
Kiumarsi, Bahare; Lewis, Frank L
2015-01-01
This paper presents a partially model-free adaptive optimal control solution to the deterministic nonlinear discrete-time (DT) tracking control problem in the presence of input constraints. The tracking error dynamics and reference trajectory dynamics are first combined to form an augmented system. Then, a new discounted performance function based on the augmented system is presented for the optimal nonlinear tracking problem. In contrast to the standard solution, which finds the feedforward and feedback terms of the control input separately, the minimization of the proposed discounted performance function gives both feedback and feedforward parts of the control input simultaneously. This enables us to encode the input constraints into the optimization problem using a nonquadratic performance function. The DT tracking Bellman equation and tracking Hamilton-Jacobi-Bellman (HJB) are derived. An actor-critic-based reinforcement learning algorithm is used to learn the solution to the tracking HJB equation online without requiring knowledge of the system drift dynamics. That is, two neural networks (NNs), namely, actor NN and critic NN, are tuned online and simultaneously to generate the optimal bounded control policy. A simulation example is given to show the effectiveness of the proposed method.
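One common way to encode a symmetric input bound |u| <= lambda in a nonquadratic integrand, widely used in the constrained-input ADP literature (the paper's exact choice may differ), is:

```latex
\[
  W(u) \;=\; 2\int_{0}^{u} \lambda \,\tanh^{-1}\!\left(\frac{v}{\lambda}\right) dv ,
\]
% W is positive definite and grows steeply as |u| -> \lambda, so the
% minimizing policy takes the form u = \lambda \tanh(\cdot) and satisfies
% the input constraint by construction.
```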
NASA Astrophysics Data System (ADS)
Cho, G. S.
2017-09-01
For performance optimization of refrigerated warehouses, design parameters are selected based on physical parameters, such as the number of equipment units and aisles and the forklift speeds, for ease of modification. This paper provides a comprehensive framework for the system design of refrigerated warehouses. We propose a modeling approach aimed at simulation optimization, so as to meet the required design specifications using design of experiments (DOE), and analyze a simulation model using the integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of refrigerated warehouse operations.
Damm, Irina; Enger, Eileen; Chrubasik-Hausmann, Sigrun; Schieber, Andreas; Zimmermann, Benno F
2016-08-01
Fast methods for the extraction and analysis of various secondary metabolites from cocoa products were developed and optimized regarding speed and separation efficiency. Extraction by pressurized liquid extraction is automated, and the extracts are analyzed by rapid reversed-phase ultra-high-performance liquid chromatography and normal-phase high-performance liquid chromatography methods. After extraction, no further sample treatment is required before chromatographic analysis. The analytes comprise monomeric and oligomeric flavanols, flavonols, methylxanthines, N-phenylpropenoyl amino acids, and phenolic acids. Polyphenols and N-phenylpropenoyl amino acids are separated in a single run of 33 min, procyanidins are analyzed by normal-phase high-performance liquid chromatography within 16 min, and methylxanthines require only 6 min of total run time. A fourth method is suitable for phenolic acids, but only protocatechuic acid was found in relevant quantities. The optimized methods were validated and applied to 27 dark chocolates, one milk chocolate, two cocoa powders, and two food supplements based on cocoa extract.
Application of the gravity search algorithm to multi-reservoir operation optimization
NASA Astrophysics Data System (ADS)
Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.
2016-12-01
Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions, single-reservoir, and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's results in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem; the global solution equals 1.213 for this same problem. The GSA converged to 99.97% of the global solution in its average convergence history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in a smaller number of function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically-based maximum lift criteria was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple input, single output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
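The surrogate-plus-optimizer pattern described above can be sketched briefly: fit a small network to precomputed samples, then run a gradient-based search over the surrogate inputs. The toy lift surface and the (gap, overlap) naming are hypothetical stand-ins for the CFD data set and rigging parameters.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform([-1, -1], [1, 1], (200, 2))            # scaled (gap, overlap)
cl = 2.0 - (X[:, 0] - 0.3) ** 2 - 2 * (X[:, 1] + 0.2) ** 2   # toy lift surface

# Train the surrogate once; optimization runs are then nearly free
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, cl)

# Maximize predicted lift = minimize its negative over the design box
res = minimize(lambda x: -net.predict(x.reshape(1, -1))[0],
               x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("optimum (gap, overlap):", res.x, " predicted CL:", -res.fun)
```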
Time-optimal aircraft pursuit-evasion with a weapon envelope constraint
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Duke, E. L.
1990-01-01
The optimal pursuit-evasion problem between two aircraft, including nonlinear point-mass vehicle models and a realistic weapon envelope, is analyzed. Using a linear combination of flight time and the square of the vehicle acceleration as the performance index, a closed-form solution is obtained in nonlinear feedback form. Due to its modest computational requirements, this guidance law can be used for onboard real-time implementation.
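A plausible rendering of the stated index, flight time plus weighted squared acceleration, is given below (the weighting k and the exact form are assumptions, not taken from the paper):

```latex
\[
  J \;=\; \int_{0}^{t_f} \left( 1 + k\,\lVert \mathbf{a}(t) \rVert^{2} \right) dt
    \;=\; t_f \;+\; k \int_{0}^{t_f} \lVert \mathbf{a}(t) \rVert^{2}\, dt .
\]
```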
TAGS 85/2N RTG Power for Viking Lander Capsule
DOE R&D Accomplishments Database
1969-08-01
Results of studies performed by Isotopes, Inc., Nuclear Systems Division, to optimize and baseline a TAGS 85/2N RTG for the Viking Lander Capsule prime electrical power source are presented. These studies generally encompassed identifying the Viking RTG mission profile and design requirements, and establishing a baseline RTG design consistent with these requirements.
Kazakis, Georgios; Kanellopoulos, Ioannis; Sotiropoulos, Stefanos; Lagaros, Nikos D
2017-10-01
The construction industry has a major impact on the environment in which we spend most of our lives. Therefore, it is important that the outcome of architectural intuition performs well and complies with the design requirements. Architects usually describe as "optimal design" their choice among a rather limited set of design alternatives, dictated by their experience and intuition. However, modern design of structures requires accounting for a great number of criteria derived from multiple disciplines, often of conflicting nature. Such criteria derive from structural engineering, eco-design, bioclimatic, and acoustic performance. The resulting vast number of alternatives enhances the need for computer-aided architecture in order to increase the possibility of arriving at a more preferable solution. Therefore, the incorporation of smart, automatic tools in the design process, able to further guide the designer's intuition, becomes even more indispensable. The principal aim of this study is to present possibilities for integrating automatic computational techniques related to topology optimization into the intuition phase of civil structure design, as part of computer-aided architectural design. In this direction, different aspects of a new computer-aided architectural era are covered herein, related to the interpretation of the optimized designs, the difficulties resulting from the increased computational effort, and 3D printing capabilities.
Electric power market agent design
NASA Astrophysics Data System (ADS)
Oh, Hyungseon
The electric power industry in many countries has been restructured in the hope of a more economically efficient system. In the restructured system, traditional operating and planning tools based on true marginal cost do not perform well, since the required information is strictly confidential. Developing a new tool requires an understanding of offer behavior. The main objective of this study is to create a new tool for power system planning. For this purpose, this dissertation develops models for a market and for market participants. A new model is developed in this work for explaining a supply-side offer curve, and several variables are introduced to characterize the curve. Demand is estimated using a neural network, and a numerical optimization process is used to determine the values of the variables that maximize the profit of the agent. The amount of data required for the optimization is chosen with the aid of nonlinear dynamics. To suggest an optimal demand-side bidding function, two optimization problems are constructed and solved for maximizing consumer satisfaction, based on the properties of two different types of demand: price-based demand and must-be-served demand. Several different simulations are performed to test how an agent reacts in various situations. The offer behavior depends on locational benefit as well as the offer strategies of competitors.
NASA Astrophysics Data System (ADS)
Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.
2008-05-01
High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required, which result in higher complexity, more chip area, and increased power consumption; this contrasts with the limited power supply of mobile devices. This presentation discusses the application of an HSDPA receiver that has been optimized for power consumption, with the focus on the algorithmic and architectural levels. On the algorithmic level the Rake combiner, Prefilter-Rake equalizer, and MMSE equalizer are compared regarding their BER performance. Both equalizer approaches provide a significant increase in performance for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches several adaptive algorithms are available, which differ in complexity and convergence properties. To identify the algorithm that achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models with respect to their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer the power estimates of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared using the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation described both in SystemC and VHDL, targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts receiver parameters, such as filter size and oversampling ratio, to minimize the power consumption while maintaining the required performance. The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an estimated power reduction of up to 40% while the BER performance is not affected. This work utilizes SystemC and ORINOCO for a first estimation of power consumption at an early step of the design flow, so that algorithms can be compared in different operating modes, including the effects of control units. Here an algorithm with higher peak complexity and power consumption but more flexibility showed lower consumption in normal operating modes than the algorithm optimized for peak performance.
The cost of model reference adaptive control - Analysis, experiments, and optimization
NASA Technical Reports Server (NTRS)
Messer, R. S.; Haftka, R. T.; Cudney, H. H.
1993-01-01
In this paper the performance of Model Reference Adaptive Control (MRAC) is studied in numerical simulations and verified experimentally, with the objective of understanding how differences between the plant and the reference model affect the control effort. MRAC is applied analytically and experimentally to a single-degree-of-freedom system and analytically to a MIMO system with controlled differences between the model and the plant. It is shown that the control effort is sensitive to differences between the plant and the reference model. The effects of increased damping in the reference model are considered, and it is shown that requiring the controller to provide increased damping actually decreases the required control effort when differences between the plant and reference model exist. This result is useful because one of the first attempts to counteract the increased control effort due to differences between the plant and reference model might be to require less damping; however, this would actually increase the control effort. Optimization of weighting matrices is shown to help reduce the increase in required control effort. However, it was found that eventually the optimization resulted in a design that required an extremely high sampling rate for successful realization.
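A minimal MIT-rule MRAC simulation for a first-order plant with unknown gain illustrates the mechanism studied: the adaptation law drives a feedforward gain toward its matched value. This single-input toy is not the paper's experimental setup, and the adaptation rate gamma is arbitrary.

```python
import numpy as np

dt, T = 1e-3, 40.0
a, kp, km = 1.0, 2.0, 1.0            # plant pole, plant gain, model gain
gamma = 2.0                          # adaptation rate (arbitrary)
y = ym = theta = 0.0
for t in np.arange(0, T, dt):
    r = 1.0 if (t % 4) < 2 else -1.0          # square-wave reference
    u = theta * r                             # adjustable feedforward gain
    y  += dt * (-a * y + kp * u)              # plant:  y'  = -a y  + kp u
    ym += dt * (-a * ym + km * r)             # model:  ym' = -a ym + km r
    e = y - ym
    theta += dt * (-gamma * e * ym)           # MIT rule: theta' = -g e ym
print("final gain:", round(theta, 3), " matched value km/kp:", km / kp)
```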
Extremal Optimization: Methods Derived from Co-Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.G.
1999-07-13
We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach yields an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
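A minimal sketch of tau-EO, the one-parameter variant alluded to above, applied to balanced graph bipartitioning; the random graph, tau value, and move scheme are illustrative assumptions.

```python
import random

# Minimal tau-EO sketch for balanced graph bipartitioning, in the spirit of
# Boettcher & Percus. Instance and constants are illustrative.
random.seed(0)
n, tau = 32, 1.4
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < 0.15]
adj = {i: [] for i in range(n)}
for i, j in edges:
    adj[i].append(j)
    adj[j].append(i)

side = [i % 2 for i in range(n)]                 # balanced initial partition

def cutsize(s):
    return sum(s[i] != s[j] for i, j in edges)

best, best_side = cutsize(side), side[:]
for step in range(20000):
    # fitness of a vertex = fraction of its neighbours on its own side
    fit = sorted((sum(side[u] == side[v] for u in adj[v]) / max(len(adj[v]), 1), v)
                 for v in range(n))              # worst (lowest fitness) first
    u = 1.0 - random.random()                    # u in (0, 1]
    k = min(int(u ** (-1.0 / (tau - 1.0))), n)   # rank k drawn with P(k) ~ k**-tau
    v = fit[k - 1][1]
    # swap the chosen vertex with a random vertex from the other side,
    # which keeps the partition balanced (the EO move for this problem)
    w = random.choice([x for x in range(n) if side[x] != side[v]])
    side[v], side[w] = side[w], side[v]
    c = cutsize(side)
    if c < best:
        best, best_side = c, side[:]
print("best cut size found:", best)
```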
A Multi-Objective Approach to Tactical Maneuvering Within Real Time Strategy Games
The resulting agent does not require training or tree search to optimize, allowing for consistent, effective performance across all scenarios against a variety of opposing tactical options.
The Development of Lightweight Commercial Vehicle Wheels Using Microalloying Steel
NASA Astrophysics Data System (ADS)
Lu, Hongzhou; Zhang, Lilong; Wang, Jiegong; Xuan, Zhaozhi; Liu, Xiandong; Guo, Aimin; Wang, Wenjun; Lu, Guimin
Lightweight wheels can reduce the weight of a commercial vehicle by about 100 kg, saving energy, reducing emissions, and increasing profits for logistics companies. The development of lightweight commercial vehicle wheels is achieved through a new rim steel, process optimization of flash butt welding, and structure optimization by finite element methods. Niobium micro-alloying technology can improve the hole expansion rate, weldability, and fatigue performance of wheel steel. Based on this technology, a special wheel steel has been developed whose microstructure is ferrite and bainite, with high formability, high fatigue performance, and stable mechanical properties. The Nb content of the new steel is 0.025% and its hole expansion rate is ≥ 100%. At the same time, welding parameters including electric upsetting time, upset allowance, upsetting pressure, and flash allowance were optimized, and an optimized structure was obtained through CAE analysis. As a result, the weight of the 22.5 in × 8.25 in wheel is reduced to 31.5 kg, the lightest among wheels of the same size. Its bending fatigue and radial fatigue performance meet the application requirements of truck makers and logistics companies.
On sustainable and efficient design of ground-source heat pump systems
NASA Astrophysics Data System (ADS)
Grassi, W.; Conti, P.; Schito, E.; Testi, D.
2015-11-01
This paper is mainly aimed at stressing some fundamental features of GSHP design and is based on broad research we are performing at the University of Pisa. In particular, we focus the discussion on an environmentally sustainable approach based on performance optimization over the entire operational life. The proposed methodology investigates design and management strategies to find the optimal level of exploitation of the ground source, referring to other technical means to cover the remaining energy requirements and modulate the power peaks. The method is holistic, considering the system as a whole rather than focusing only on some components usually considered the most important ones. Each subsystem is modeled and coupled to the others in a full set of equations, which is used within an optimization routine to reproduce the operative performance of the overall GSHP system. The recommended methodology is thus a 4-in-1 activity, including sizing of components, lifecycle performance evaluation, optimization, and feasibility analysis. The paper also reviews some previous works concerning possible applications of the proposed methodology. In conclusion, we describe ongoing research activities and objectives of future work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induce heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, lightweight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.
Model-based optimization of near-field binary-pixelated beam shapers
Dorrer, C.; Hassett, J.
2017-01-23
The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2, without the additional fabrication complexity or cost. The optimization process preserves the pixel distribution statistical properties. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beam-shaping performance. This is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.
Performance characteristics of long-track speed skaters: a literature review.
Konings, Marco J; Elferink-Gemser, Marije T; Stoter, Inge K; van der Meer, Dirk; Otten, Egbert; Hettinga, Florentina J
2015-04-01
Speed skating is an intriguing sport to study from different perspectives due to its peculiar way of motion and the multiple determinants of performance. This review aimed to identify what is known about (long-track) speed skating and which individual characteristics determine speed skating performance. A total of 49 studies were included. Based on a multidimensional performance model, person-related performance characteristics were categorized into anthropometrical, technical, physiological, tactical, and psychological characteristics. Literature was found on anthropometry, technique, physiology, and tactics. However, psychological studies were clearly under-represented. In particular, the role of self-regulation might deserve more attention to further understand mechanisms relevant for optimal performance and, for instance, pacing. Another remarkable finding was that the technically/biomechanically favourable crouched skating technique (i.e., small knee and trunk angles) leads to a physiological disadvantage: a smaller knee angle may increase the deoxygenation of the working muscles. This is an important underlying aspect of the pacing tactics in speed skating. Elite speed skaters need to find the optimal balance between obtaining a fast start and preventing negative technical adaptations later in the race by distributing their available energy over the race in an optimal way. More research is required to gain insight into how this impacts the processes of fatigue and coordination during speed skating races. This can lead to a better understanding of how elite speed skaters can maintain optimal technical characteristics throughout the entire race, and how they can adapt their pacing to optimize all identified aspects that determine performance.
Using SpF to Achieve Petascale for Legacy Pseudospectral Applications
NASA Technical Reports Server (NTRS)
Clune, Thomas L.; Jiang, Weiyuan
2014-01-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF, as well as present preliminary performance results provided by the improved scalability.
MODELING AND PERFORMANCE EVALUATION FOR AVIATION SECURITY CARGO INSPECTION QUEUING SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, Glenn O; Olama, Mohammed M; Rose, Terri A
Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we conduct a performance evaluation study of an aviation security cargo inspection queuing system for material flow and accountability. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered, such as system capacity, residual capacity, and throughput. These metrics are performance indicators of the system's ability to service current needs and its response capacity to additional requests. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures will reduce the overall cost and shipping delays associated with the new inspection requirements.
Treatment of anemia in chronic kidney disease: known, unknown, and both.
Foley, Robert N
2011-01-01
Erythropoiesis is a rapidly evolving research arena and several mechanistic insights show therapeutic promise. In contrast with the rapid advance of mechanistic science, optimal management of anemia in patients with chronic kidney disease remains a difficult and polarizing issue. Although several large hemoglobin target trials have been performed, optimal treatment targets remain elusive, because none of the large trials to date have unequivocally identified differences in primary outcome rates or death rates, and because other reported outcomes indicate the potential for harm (rates of stroke, early requirement for dialysis, and vascular access thrombosis) and benefit (reductions in transfusion requirements and fatigue).
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2015-01-01
Performance, reliability, and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid and solid rockets. Slowly, the technology readiness level of hybrids has been increasing thanks to various large-scale tests and flight tests of hybrid rockets. One remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized to meet the performance requirements at the lowest cost.
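The following sketch shows the general shape of such a GA: minimize a weight-driven cost while a penalty enforces a performance floor. The cost and performance functions below are invented stand-ins, not the paper's hybrid-rocket models.

```python
import random
random.seed(1)

def cost(x):        # notional cost grows with component masses x
    return 10 * x[0] + 6 * x[1] + 3 * x[2]

def performance(x): # notional delivered performance with diminishing returns
    return 40 * x[0] ** 0.5 + 25 * x[1] ** 0.5 + 10 * x[2] ** 0.5

REQ = 120.0         # required performance level (illustrative)

def fitness(x):
    penalty = max(0.0, REQ - performance(x)) * 100.0  # constraint penalty
    return cost(x) + penalty                          # lower is better

pop = [[random.uniform(0.1, 10) for _ in range(3)] for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness)
    parents = pop[:20]                                # truncation selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # blend crossover
        if random.random() < 0.3:                         # Gaussian mutation
            i = random.randrange(3)
            child[i] = max(0.1, child[i] + random.gauss(0, 0.5))
        children.append(child)
    pop = parents + children

best = min(pop, key=fitness)
print("design:", [round(v, 2) for v in best],
      "cost:", round(cost(best), 1), "perf:", round(performance(best), 1))
```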
System Risk Assessment and Allocation in Conceptual Design
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Smith, Natasha L.; Zang, Thomas A. (Technical Monitor)
2003-01-01
As aerospace systems continue to evolve in addressing newer challenges in air and space transportation, there exists a heightened priority for significant improvement in system performance, cost effectiveness, reliability, and safety. Tools, which synthesize multidisciplinary integration, probabilistic analysis, and optimization, are needed to facilitate design decisions allowing trade-offs between cost and reliability. This study investigates tools for probabilistic analysis and probabilistic optimization in the multidisciplinary design of aerospace systems. A probabilistic optimization methodology is demonstrated for the low-fidelity design of a reusable launch vehicle at two levels, a global geometry design and a local tank design. Probabilistic analysis is performed on a high fidelity analysis of a Navy missile system. Furthermore, decoupling strategies are introduced to reduce the computational effort required for multidisciplinary systems with feedback coupling.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Tian, Y. M.; Wang, K. Y.; Li, G.; Zou, X. W.; Chai, Y. S.
2017-09-01
This study focused on an optimization method for a ceramic proppant material with both low cost and high performance that meets the requirements of the Chinese Petroleum and Gas Industry Standard (SY/T 5108-2006). An orthogonal experimental design of L9(3^4) was employed to study the significance ranking of three factors: the weight ratio of white clay to bauxite, the dolomite content, and the sintering temperature. For crush resistance, both the range analysis and the variance analysis showed that the optimal experimental conditions were a white clay to bauxite weight ratio of 3/7, a dolomite content of 3 wt.%, and a temperature of 1350°C. For bulk density, the most important factor was the sintering temperature, followed by the dolomite content and then the ratio of white clay to bauxite.
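For readers unfamiliar with the method, the sketch below shows how range analysis on an L9 orthogonal array works; the response values are invented placeholders rather than the study's measured crush resistance or bulk density.

```python
import numpy as np

# Range analysis on the first three columns of an L9(3^4) orthogonal array.
L9 = np.array([  # columns: factors A, B, C at levels 0..2
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
# Placeholder responses for the nine runs (not the study's data)
y = np.array([8.1, 7.4, 6.9, 7.8, 6.5, 7.2, 7.0, 6.8, 6.1])

for f, name in enumerate(["clay/bauxite ratio", "dolomite content", "temperature"]):
    level_means = [y[L9[:, f] == lv].mean() for lv in range(3)]
    rng = max(level_means) - min(level_means)
    print(f"{name}: level means {np.round(level_means, 2)}, range {rng:.2f}")
# The factor with the largest range is judged most significant; the best
# level for each factor is the one with the most favourable mean response.
```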
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2014-01-01
Performance, reliability, and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and, later, on solid rockets. Slowly, the technology readiness level of hybrids has been increasing thanks to various large-scale tests and flight tests of hybrid rockets. A remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized to meet the performance requirements at the lowest cost.
NASA Astrophysics Data System (ADS)
Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.
2017-05-01
Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and a PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation, for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime-algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA- and PSO-optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and the additions required to generate successful problem-specific parameter sets.
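A minimal PSO sketch including an exponential velocity-decay term, one of the additions named above; the sphere objective and all constants are illustrative stand-ins for the MSER detection fitness.

```python
import math
import random
random.seed(2)

def f(p):
    return sum(v * v for v in p)       # placeholder objective (minimize)

DIM, N, ITERS = 4, 20, 200
w0, c1, c2, lam = 0.9, 1.5, 1.5, 0.01  # lam sets the decay rate (assumed)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=f)[:]

for t in range(ITERS):
    w = w0 * math.exp(-lam * t)        # exponential velocity/inertia decay
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):    # update personal and global bests
            pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]
print("best objective value:", f(gbest))
```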
Peak Seeking Control for Reduced Fuel Consumption with Preliminary Flight Test Results
NASA Technical Reports Server (NTRS)
Brown, Nelson
2012-01-01
The Environmentally Responsible Aviation (ERA) project seeks to accomplish the simultaneous reduction of fuel burn, noise, and emissions. A project at NASA Dryden Flight Research Center is contributing to ERA's goals by exploring the practical application of real-time trim configuration optimization for enhanced performance and reduced fuel consumption. This peak-seeking control approach is based on a Newton-Raphson algorithm using a time-varying Kalman filter to estimate the gradient of the performance function. In real-time operation, deflections of the symmetric ailerons, trailing-edge flaps, and leading-edge flaps of a modified F-18 are directly optimized, and the horizontal stabilators and angle of attack are indirectly optimized. Preliminary results from three research flights are presented herein. The optimization system found a trim configuration that required approximately 3.5% less fuel flow than the baseline trim at the given flight condition. The algorithm consistently rediscovered the solution from several initial conditions. These preliminary results show the algorithm performs well and is expected to show similar results at other flight conditions and aircraft configurations.
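The sketch below illustrates the peak-seeking idea on a notional quadratic fuel-flow map: estimate the local gradient from perturbed measurements, then take Newton-Raphson steps toward the optimum. The map, noise level, and fixed Hessian guess are assumptions; the flight system estimated the gradient with a time-varying Kalman filter.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuel_flow(u):                       # notional map with minimum at u = 2.0
    return 1.0 + 0.5 * (u - 2.0) ** 2 + rng.normal(0, 0.002)

u, h, H = 0.0, 0.05, 1.0                # operating point, dither, Hessian guess
for step in range(25):
    # central-difference gradient estimate from dithered measurements
    g = (fuel_flow(u + h) - fuel_flow(u - h)) / (2 * h)
    u -= g / H                          # Newton-Raphson step toward the optimum
print(f"converged trim setting u = {u:.3f} (true optimum 2.0)")
```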
Aerodynamics and Optimal Design of Biplane Wind Turbine Blades
NASA Astrophysics Data System (ADS)
Chiu, Phillip
In order to improve energy capture and reduce the cost of wind energy, wind turbines have grown significantly larger in the past few decades. As their blades get longer, the design of the inboard region (near the blade root) becomes a trade-off between competing structural and aerodynamic requirements. State-of-the-art blades require thick airfoils near the root to efficiently support large loads inboard, but those thick airfoils have inherently poor aerodynamic performance. New designs are required to circumvent this design compromise. One such design is the "biplane blade", in which the thick airfoils in the inboard region are replaced with thinner airfoils in a biplane configuration. This design was shown previously to have significantly increased structural performance over conventional blades. In addition, the biplane airfoils can provide increased lift and aerodynamic efficiency compared to thick monoplane inboard airfoils, indicating a potential for increased power extraction. This work investigates the fundamental aerodynamic aspects, aerodynamic design and performance, and optimal structural design of the biplane blade. First, the two-dimensional aerodynamics of biplanes with relatively thick airfoils are investigated, showing unique phenomena which arise as a result of airfoil thickness. Next, the aerodynamic design of the full biplane blade is considered. Two biplane blades are designed for optimal aerodynamic loading, and their aerodynamic performance quantified. Considering blades with practical chord distributions and including the drag of the mid-blade joint, it is shown that biplane blades have comparable power output to conventional monoplane designs. The results of this analysis also show that the biplane blades can be designed with significantly less chord than conventional designs, a characteristic which enables larger blade designs. The aerodynamic loads on the biplane blades are shown to be increased in gust conditions and decreased under extreme conditions. Finally, considering these aerodynamic loads, the blade mass reductions achievable by biplane blades are quantified. The internal structure of the biplane blade is designed using a multi-disciplinary optimization which seeks to minimize mass, subject to constraints which represent realistic design requirements. Using this approach, it is shown that biplane blades can be built more than 45% lighter than a similarly-optimized conventional blade; the reasons for these mass reductions are examined in detail. As blade length is increased, these mass reductions are shown to be even more significant. These large mass reductions are indicative of significant cost-of-electricity reductions from rotors fitted with biplane blades. Taken together, these results show that biplane blades are a concept which can enable the next generation of larger wind turbine rotors.
Regression analysis as a design optimization tool
NASA Technical Reports Server (NTRS)
Perley, R.
1984-01-01
The optimization concepts are described in relation to an overall design process, as opposed to a detailed part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.
Driver electronics design and control for a total artificial heart linear motor.
Unthan, Kristin; Cuenca-Navalon, Elena; Pelletier, Benedikt; Finocchiaro, Thomas; Steinseifer, Ulrich
2018-01-27
For any implantable device, size and efficiency are critical properties. Thus, a linear motor for a Total Artificial Heart was optimized with a focus on driver electronics and control strategies. Hardware requirements were defined from the power supply and motor setup. Four full bridges were chosen for the power electronics. Shunt resistors were set up for current measurement. Unipolar and bipolar switching for power electronics control were compared regarding current ripple and power losses. Unipolar switching showed smaller current ripple and required less power to create the necessary motor forces. Based on calculations for minimal power losses, the Lorentz force was distributed to the actuator's four coils. The distribution was determined as the ratio of effective magnetic flux through each coil, which was captured by a force test rig. Static and dynamic measurements under physiological conditions analyzed the interaction of control and hardware, and all efficiencies were over 89%. In conclusion, the designed electronics, optimized control strategy, and applied current distribution create the required motor force and perform optimally under physiological conditions. The developed driver electronics and control offer optimized size and efficiency for any implantable or portable device with multiple independent motor coils.
SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy
Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui
2014-01-01
Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063
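A sketch of the SVM discrimination step with the two kernels and training proportions discussed above, using scikit-learn; the four-dimensional features are random stand-ins for the ridge-distance, global gray, frequency, and Harris corner factors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Simulated feature vectors: real prints vs. synthetic prints whose feature
# distribution is shifted and rescaled (an illustrative assumption).
rng = np.random.default_rng(0)
X_real = rng.normal(0.0, 1.0, (500, 4))
X_synth = rng.normal(0.8, 1.2, (500, 4))
X = np.vstack([X_real, X_synth])
y = np.array([0] * 500 + [1] * 500)   # 0 = real, 1 = synthetic

# The abstract's optimized choices: poly kernel with 5% training data for a
# 95% accuracy floor, RBF kernel with 15% for a 98% floor.
for kernel, train_frac in [("poly", 0.05), ("rbf", 0.15)]:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, stratify=y, random_state=0)
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    print(kernel, "accuracy:", round(clf.score(X_te, y_te), 3))
```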
Power-constrained supercomputing
NASA Astrophysics Data System (ADS)
Bailey, Peter E.
As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. 
When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
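A minimal sketch of the LP scheduling idea described in this abstract, using scipy: split each phase of computation across candidate configurations (DVFS state / thread count) to minimize run time while average power stays under a cap. The phase times, power draws, and cap are invented; the linearization (total energy ≤ cap × total time) is one standard way to keep the average-power constraint linear.

```python
import numpy as np
from scipy.optimize import linprog

t = np.array([[1.0, 1.4, 2.0],     # phase run times in configs fast/med/slow
              [2.0, 2.6, 3.5],
              [0.5, 0.7, 1.1]])
w = np.array([[120,  95,  70],     # corresponding average power draw (W)
              [110,  90,  65],
              [130, 100,  75]])
P_CAP = 90.0                        # allowed average power (W), illustrative
n_p, n_c = t.shape

c = t.ravel()                                      # minimize total time
# average-power cap, linearized: total energy <= P_CAP * total time
A_ub = [((w - P_CAP) * t).ravel()]
b_ub = [0.0]
A_eq = np.zeros((n_p, n_p * n_c))                  # each phase fully assigned
for p in range(n_p):
    A_eq[p, p * n_c:(p + 1) * n_c] = 1.0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.ones(n_p),
              bounds=[(0, 1)] * (n_p * n_c))
print("per-phase config fractions:\n", np.round(res.x.reshape(n_p, n_c), 2))
print("total time:", round(res.fun, 2))
```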
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, which is a leading cause of cost model brittleness or instability.
Experimental Validation of an Integrated Controls-Structures Design Methodology
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.
1996-01-01
The first experimental validation of an integrated controls-structures design methodology for a class of large-order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, the amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.
The design of sport and touring aircraft
NASA Technical Reports Server (NTRS)
Eppler, R.; Guenther, W.
1984-01-01
General considerations concerning the design of a new aircraft are discussed, taking into account the objective of developing an aircraft that can economically satisfy a certain spectrum of tasks. In the past, requirements related to the design of sport and touring aircraft mainly included a high cruising speed and short take-off and landing runs. Additional requirements for new aircraft are now low fuel consumption and optimal efficiency. A computer program for the computation of flight performance makes it possible to vary automatically a number of parameters, such as flight altitude, wing area, and wing span. The appropriate design characteristics are to a large extent determined by the selection of the flight altitude. Three different wing profiles are compared. Potential improvements with respect to the performance of the aircraft and its efficiency are related to the use of fiber composites, the employment of better propeller profiles, more efficient engines, and the utilization of suitable instrumentation for optimal conduct of the flight.
Requirements analysis and preliminary design of a robotic assistant for reconstructive microsurgery.
Vanthournhout, L; Herman, B; Duisit, J; Château, F; Szewczyk, J; Lengelé, B; Raucent, B
2015-08-01
Microanastomosis is a microsurgical gesture that involves suturing two very small blood vessels together. This gesture is used in many operations such as auto-grafting of avulsed limbs, pediatric surgery, and reconstructive surgery, including breast reconstruction by free flap. When vessels have diameters smaller than one millimeter, hand tremors make movements difficult to control. This paper introduces our preliminary steps towards robotic assistance for helping surgeons perform microanastomosis in optimal conditions, in order to increase gesture quality and reliability even on smaller diameters. A general needs assessment and an experimental motion analysis were performed to define the requirements of the robot. Geometric parameters of the kinematic structure were then optimized to fulfill specific objectives. A prototype of the robot is currently being designed and built in order to provide a sufficient increase in accuracy without prolonging the duration of the procedure.
Chan, Ho Sze; de Blois, Erik; Konijnenberg, Mark W; Morgenstern, Alfred; Bruchertseifer, Frank; Norenberg, Jeffrey P; Verzijlbergen, Fred J; de Jong, Marion; Breeman, Wouter A P
2017-01-01
Bismuth-213 (213Bi, T1/2 = 45.6 min) is one of the most frequently used alpha-emitters in cancer research. High specific activity radioligands are required for peptide receptor radionuclide therapy. The use of generators containing less than 222 MBq of 225Ac (actinium), due to limited availability and the high cost of producing large-scale 225Ac/213Bi generators, might however complicate in vitro and in vivo applications. Here we present optimized labelling conditions of a DOTA-peptide with a 225Ac/213Bi generator (< 222 MBq) for preclinical applications, using DOTA-Tyr3-octreotate (DOTATATE), a somatostatin analogue. The following labelling conditions of DOTATATE with 213Bi were investigated: the peptide mass was varied from 1.7 to 7.0 nmol, the concentration of TRIS buffer from 0.15 to 0.34 mol/L, and ascorbic acid from 0 to 71 mmol/L in 800 μL. All reactions were performed at 95 °C for 5 min. After incubation, DTPA (50 nmol) was added to stop the labelling reaction. Besides optimizing the labelling conditions, the incorporation yield was determined by ITLC-SG and the radiochemical purity (RCP) was monitored by RP-HPLC up to 120 min after labelling. Dosimetry studies in the reaction vial were performed using Monte Carlo simulation, and an in vitro clonogenic assay was performed with a rat pancreatic tumour cell line, CA20948. At least 3.5 nmol DOTATATE was required to obtain an incorporation ≥ 99% with 100 MBq 213Bi (at optimized pH conditions, pH 8.3 with 0.15 mol/L TRIS) in a reaction volume of 800 μL. The cumulative absorbed dose in the reaction vial was 230 Gy/100 MBq in 30 min. A minimal final concentration of 0.9 mmol/L ascorbic acid was required for ~100 MBq (t = 0) to minimize radiation damage to DOTATATE. The osmolarity was decreased to 0.45 Osmol/L. Under optimized labelling conditions, 213Bi-DOTATATE remained stable up to 2 h after labelling, with an RCP ≥ 85%. In vitro results showed a negative correlation between ascorbic acid concentration and cell survival. The 213Bi-DOTA-peptide labelling conditions, including peptide amount, quencher, and pH, were optimized to meet the requirements for preclinical applications in peptide receptor radionuclide therapy.
Multiparameter optimization of mammography: an update
NASA Astrophysics Data System (ADS)
Jafroudi, Hamid; Muntz, E. P.; Jennings, Robert J.
1994-05-01
Previously in this forum we have reported the application of multiparameter optimization techniques to the design of a minimum-dose mammography system. The approach used a reference system to define the physical imaging performance required and the dose to which the dose for the optimized system should be compared. During the course of implementing the resulting design in hardware suitable for laboratory testing, the state of the art in mammographic imaging changed, so that the original reference system, which did not have a grid, was no longer appropriate. A reference system with a grid was selected in response to this change, and at the same time the optimization procedure was modified to make it more general and to facilitate study of the optimized design under a variety of conditions. We report the changes in the procedure, and the results obtained using the revised procedure and the up-to-date reference system. Our results, which are supported by laboratory measurements, indicate that the optimized design can image small objects as well as the reference system using only about 30% of the dose required by the reference system. Hardware meeting the specification produced by the optimization procedure and suitable for clinical use is currently under evaluation in the Diagnostic Radiology Department at the Clinical Center, NIH.
Active Mirror Predictive and Requirements Verification Software (AMP-ReVS)
NASA Technical Reports Server (NTRS)
Basinger, Scott A.
2012-01-01
This software is designed to predict large active mirror performance at various stages in the fabrication lifecycle of the mirror. It was developed for 1-meter class powered mirrors for astronomical purposes, but is extensible to other geometries. The package accepts finite element model (FEM) inputs and laboratory measured data for large optical-quality mirrors with active figure control. It computes phenomenological contributions to the surface figure error using several built-in optimization techniques. These phenomena include stresses induced in the mirror by the manufacturing process and the support structure, the test procedure, high spatial frequency errors introduced by the polishing process, and other process-dependent deleterious effects due to light-weighting of the mirror. Then, depending on the maturity of the mirror, it either predicts the best surface figure error that the mirror will attain, or it verifies that the requirements for the error sources have been met once the best surface figure error has been measured. The unique feature of this software is that it ties together physical phenomenology with wavefront sensing and control techniques and various optimization methods including convex optimization, Kalman filtering, and quadratic programming to both generate predictive models and to do requirements verification. This software combines three distinct disciplines: wavefront control, predictive models based on FEM, and requirements verification using measured data in a robust, reusable code that is applicable to any large optics for ground and space telescopes. The software also includes state-of-the-art wavefront control algorithms that allow closed-loop performance to be computed. It allows for quantitative trade studies to be performed for optical systems engineering, including computing the best surface figure error under various testing and operating conditions. After the mirror manufacturing process and testing have been completed, the software package can be used to verify that the underlying requirements have been met.
Kinematics and dynamics analysis of a quadruped walking robot with parallel leg mechanism
NASA Astrophysics Data System (ADS)
Wang, Hongbo; Sang, Lingfeng; Hu, Xing; Zhang, Dianfan; Yu, Hongnian
2013-09-01
A walking robot for the elderly and the disabled is required to have a large payload capacity, high stiffness, and stability. However, existing walking robots cannot meet these requirements because of their weight-payload ratio and simple functionality. Enhancing the capacity and functions of walking robots is therefore an important research issue. According to walking requirements, and combining modularization and reconfigurable ideas, a quadruped/biped reconfigurable walking robot with a parallel leg mechanism is proposed. The proposed robot can be used as either a biped or a quadruped walking robot. The kinematics and performance analysis of a 3-UPU parallel mechanism, which is the basic leg mechanism of the quadruped walking robot, are conducted and the structural parameters are optimized. The results show that the performance of the walking robot is optimal when the circumradii R and r of the upper and lower platforms of the leg mechanism are 161.7 mm and 57.7 mm, respectively. Based on the optimal results, the kinematics and dynamics of the quadruped walking robot in the static walking mode are derived with the application of parallel mechanism and influence coefficient theory, and the optimal coordinated distribution of the dynamic load for the quadruped walking robot with over-determinate inputs is analyzed, which resolves the dynamic load coupling caused by the branches' constraints during walking. Besides laying a theoretical foundation for development of the prototype, the kinematics and dynamics studies on the quadruped walking robot also advance the theoretical research on quadruped walking and the practical applications of parallel mechanisms.
An integrated optimum design approach for high speed prop-rotors including acoustic constraints
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Wells, Valana; Mccarthy, Thomas; Han, Arris
1993-01-01
The objective of this research is to develop optimization procedures to provide design trends in high speed prop-rotors. The necessary disciplinary couplings are all considered within a closed loop multilevel decomposition optimization process. The procedures involve the consideration of blade-aeroelastic aerodynamic performance, structural-dynamic design requirements, and acoustics. Further, since the design involves consideration of several different objective functions, multiobjective function formulation techniques are developed.
ERIC Educational Resources Information Center
Panek, Paul E.; Staats, Sara; Hiles, Amanda
2006-01-01
Two studies were conducted. In study one 100 participants rated 60 occupations on the amount of cognitive/intellectual, physical, sensory-perceptual, and perceptual-motor demands they perceived as required for successful performance in that particular occupation. Results of a cluster analysis determined four clusters of occupations on the basis of…
NASA Technical Reports Server (NTRS)
Unal, Resit
1999-01-01
Multidisciplinary design optimization (MDO) is an important step in the design and evaluation of launch vehicles, since it has a significant impact on performance and lifecycle cost. The objective in MDO is to search the design space to determine the values of design parameters that optimize the performance characteristics subject to system constraints. The Vehicle Analysis Branch (VAB) at NASA Langley Research Center has computerized analysis tools in many of the disciplines required for the design and analysis of launch vehicles. Vehicle performance characteristics can be determined by the use of these computerized analysis tools. The next step is to optimize the system performance characteristics subject to multidisciplinary constraints. However, most of the complex sizing and performance evaluation codes used for launch vehicle design are stand-alone tools, operated by disciplinary experts. They are, in general, difficult to integrate and use directly for MDO. An alternative has been to utilize response surface methodology (RSM) to obtain polynomial models that approximate the functional relationships between performance characteristics and design variables. These approximation models, called response surface models, are then used to integrate the disciplines using mathematical programming methods for efficient system-level design analysis, MDO, and fast sensitivity simulations. A second-order response surface model of the form given below has been commonly used in RSM, since in many cases it can provide an adequate approximation, especially if the region of interest is sufficiently limited.
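The abstract refers to a model "of the form given", but the equation itself did not survive extraction. The standard second-order response surface model it describes is:

```latex
\hat{y} = \beta_0 + \sum_{i=1}^{k} \beta_i x_i
        + \sum_{i=1}^{k} \beta_{ii} x_i^{2}
        + \sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \beta_{ij} x_i x_j
```

where ŷ is the approximated performance characteristic, the x_i are the k design variables, and the β coefficients are fit by least squares to the sampled designs.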
An optimized and low-cost FPGA-based DNA sequence alignment--a step towards personal genomics.
Shah, Hurmat Ali; Hasan, Laiq; Ahmad, Nasir
2013-01-01
DNA sequence alignment is a cardinal process in computational biology, but it is also computationally expensive when performed on traditional computing platforms such as CPUs. Of the many off-the-shelf platforms explored for speeding up the computation, the FPGA stands out as the best candidate due to its performance per dollar and performance per watt. These two advantages make the FPGA the most appropriate choice for realizing the aim of personal genomics. Previous implementations of DNA sequence alignment did not take into consideration the price of the device on which the optimization was performed. This paper presents optimizations over a previous FPGA implementation that increase the overall speed-up achieved while accounting for the price of the platform. The optimizations are: (1) the array of processing elements runs on changes in input value rather than on a clock, eliminating the need for tight clock synchronization; (2) the implementation is not restricted by the size of the sequences to be aligned; (3) the waiting time required to load the sequences onto the FPGA is reduced to the minimum possible; and (4) an efficient method is devised to store the output matrix, making it possible to save the diagonal elements to be used in the next pass in parallel with the computation of the output matrix. Implemented on a Spartan-3 FPGA, this design achieved a 20-times performance improvement in terms of CUPS (cell updates per second) over a GPP implementation.
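For reference, the sketch below gives the Smith-Waterman recurrence that such an array of processing elements evaluates; the scoring constants are conventional choices, and the paper's FPGA-specific optimizations are not modeled.

```python
# Reference Smith-Waterman local alignment in plain Python.
MATCH, MISMATCH, GAP = 2, -1, -2   # conventional scoring, not the paper's

def smith_waterman(a, b):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    # Cells on each anti-diagonal are mutually independent, which is what
    # lets one processing element per column update them in parallel.
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
            H[i][j] = max(0, diag, H[i - 1][j] + GAP, H[i][j - 1] + GAP)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCA"))   # best local-alignment score
```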
Optimization of entry-vehicle shapes during conceptual design
NASA Astrophysics Data System (ADS)
Dirkx, D.; Mooij, E.
2014-01-01
During the conceptual design of a re-entry vehicle, the vehicle shape and geometry can be varied and its impact on performance can be evaluated. In this study, the shape optimization of two classes of vehicles has been studied: a capsule and a winged vehicle. Their aerodynamic characteristics were analyzed using local-inclination methods, automatically selected per vehicle segment. Entry trajectories down to Mach 3 were calculated assuming trimmed conditions. For the winged vehicle, which has both a body flap and elevons, a guidance algorithm to track a reference heat-rate was used. Multi-objective particle swarm optimization was used to optimize the shape using objectives related to mass, volume and range. The optimizations show a large variation in vehicle performance over the explored parameter space. Areas of very strong non-linearity are observed in the direct neighborhood of the two-dimensional Pareto fronts. This indicates the need for robust exploration of the influence of vehicle shapes on system performance during engineering trade-offs, which are performed during conceptual design. A number of important aspects of the influence of vehicle behavior on the Pareto fronts are observed and discussed. There is a nearly complete convergence to narrow-wing solutions for the winged vehicle. Also, it is found that imposing pitch-stability for the winged vehicle at all angles of attack results in vehicle shapes which require upward control surface deflections during the majority of the entry.
Local performance optimization for a class of redundant eight-degree-of-freedom manipulators
NASA Technical Reports Server (NTRS)
Williams, Robert L., II
1994-01-01
Local performance optimization for joint limit avoidance and manipulability maximization (singularity avoidance) is obtained by using the Jacobian matrix pseudoinverse and by projecting the gradient of an objective function into the Jacobian null space. Real-time redundancy optimization control is achieved for an eight-joint redundant manipulator having a three-axis spherical shoulder, a single elbow joint, and a four-axis spherical wrist. Symbolic solutions are used for both full-Jacobian and wrist-partitioned pseudoinverses, partitioned null-space projection matrices, and all objective function gradients. A kinematic limitation of this class of manipulators and the limitation's effect on redundancy resolution are discussed. Results obtained with graphical simulation are presented to demonstrate the effectiveness of local redundant manipulator performance optimization. Actual hardware experiments performed to verify the simulated results are also discussed. A major result is that the partitioned solution is desirable because of low computation requirements. The partitioned solution is suboptimal compared with the full solution because translational and rotational terms are optimized separately; however, the results show that the difference is not significant. Singularity analysis reveals that no algorithmic singularities exist for the partitioned solution. The partitioned and full solutions share the same physical manipulator singular conditions. When compared with the full solution, the partitioned solution is shown to be ill-conditioned in smaller neighborhoods of the shared singularities.
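A minimal numpy sketch of the resolution law described above: joint rates from the Jacobian pseudoinverse plus an objective gradient projected into the null space. The 8-DOF Jacobian, commanded rates, and joint-limit objective below are random stand-ins, not the paper's manipulator model.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(6, 8))               # task Jacobian: 6 task DOF, 8 joints
xdot = np.array([0.1, 0, 0, 0, 0, 0.05])  # commanded end-effector rates
q = rng.uniform(-1.0, 1.0, 8)             # current joint angles
q_mid = np.zeros(8)                       # joint-range midpoints

Jp = np.linalg.pinv(J)                    # Jacobian pseudoinverse
gradH = -(q - q_mid)                      # gradient of joint-limit objective
N = np.eye(8) - Jp @ J                    # null-space projector
qdot = Jp @ xdot + N @ gradH              # task motion + self-motion

# The null-space term changes the joint motion without disturbing the task:
print("task-rate error:", np.linalg.norm(J @ qdot - xdot))  # ~0
```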
Multi-objective optimization of GENIE Earth system models.
Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J
2009-07-13
The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.
Economic-Oriented Stochastic Optimization in Advanced Process Control of Chemical Processes
Dobos, László; Király, András; Abonyi, János
2012-01-01
Finding the optimal operating region of chemical processes is an inevitable step toward improving economic performance. Usually the optimal operating region is situated close to process constraints related to product quality or process safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. A multilevel stochastic optimization framework is proposed to determine the optimal setpoint values of control loops with respect to predetermined risk levels, uncertainties, and costs of violation of process constraints. The proposed framework is realized as direct search-type optimization of Monte-Carlo simulation of the controlled process. The concept is illustrated throughout by a well-known benchmark problem related to the control of a linear dynamical system and the model predictive control of a more complex nonlinear polymerization process. PMID:23213298
High Speed Civil Transport Design Using Collaborative Optimization and Approximate Models
NASA Technical Reports Server (NTRS)
Manning, Valerie Michelle
1999-01-01
The design of supersonic aircraft requires complex analysis in multiple disciplines, posing a challenge for optimization methods. In this thesis, collaborative optimization, a design architecture developed to solve large-scale multidisciplinary design problems, is applied to the design of supersonic transport concepts. Collaborative optimization takes advantage of natural disciplinary segmentation to facilitate parallel execution of design tasks. Discipline-specific design optimization proceeds while a coordinating mechanism ensures progress toward an optimum and compatibility between disciplinary designs. Two concepts for supersonic aircraft are investigated: a conventional delta-wing design and a natural laminar flow concept that achieves improved performance by exploiting properties of supersonic flow to delay boundary layer transition. The work involves the development of aerodynamic and structural analyses, and their integration within a collaborative optimization framework. It represents the most extensive application of the method to date.
Design of pilot studies to inform the construction of composite outcome measures.
Edland, Steven D; Ard, M Colin; Li, Weiwei; Jiang, Lingjing
2017-06-01
Composite scales have recently been proposed as outcome measures for clinical trials. For example, the Prodromal Alzheimer's Cognitive Composite (PACC) is the sum of z-score-normed component measures assessing episodic memory, timed executive function, and global cognition. Alternative methods of calculating composite total scores have been proposed, using the weighted sum of the component measures with weights chosen to maximize the signal-to-noise ratio of the resulting composite score. Optimal weights can be estimated from pilot data, but it is an open question how large a pilot trial is required to reliably calculate optimal weights. In this manuscript, we describe the calculation of optimal weights and use large-scale computer simulations to investigate how large a pilot study sample is required to inform the calculation of optimal weights. The simulations are informed by the pattern of decline observed in cognitively normal subjects enrolled in the Alzheimer's Disease Cooperative Study (ADCS) Prevention Instrument cohort study, restricted to n=75 subjects age 75 and over with an ApoE E4 risk allele and therefore likely to have an underlying Alzheimer neurodegenerative process. In the context of secondary prevention trials in Alzheimer's disease, and using the components of the PACC, we found that pilot studies as small as 100 are sufficient to meaningfully inform weighting parameters. Regardless of the pilot study sample size used to inform weights, the optimally weighted PACC consistently outperformed the standard PACC in terms of statistical power to detect treatment effects in a clinical trial. Pilot studies of size 300 produced weights that achieved near-optimal statistical power, and reduced the required sample size relative to the standard PACC by more than half. These simulations suggest that modestly sized pilot studies, comparable to a phase 2 clinical trial, are sufficient to inform the construction of composite outcome measures. Although these findings apply only to the PACC in the context of prodromal AD, the observation that weights only have to approximate the optimal weights to achieve near-optimal performance should generalize. Performing a pilot study or phase 2 trial to inform the weighting of proposed composite outcome measures is highly cost-effective. The net effect of more efficient outcome measures is that smaller trials will be required to test novel treatments. Alternatively, second-generation trials can use prior clinical trial data to inform weighting, so that greater efficiency can be achieved as we move forward.
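A sketch of the weight calculation under the usual signal-to-noise formulation: for component mean decline mu and covariance Sigma, the weights maximizing (w·mu)/sqrt(w·Sigma·w) are proportional to inv(Sigma)·mu. The simulated pilot data below are stand-ins for PACC-like component changes, and the manuscript's exact estimator may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mu = np.array([0.30, 0.20, 0.25, 0.15])    # mean decline per component
L = rng.normal(size=(4, 4))
true_Sigma = L @ L.T / 4 + np.eye(4) * 0.5      # a positive-definite covariance

n_pilot = 300                                    # pilot-study sample size
pilot = rng.multivariate_normal(true_mu, true_Sigma, n_pilot)
mu_hat = pilot.mean(axis=0)
Sigma_hat = np.cov(pilot, rowvar=False)

w = np.linalg.solve(Sigma_hat, mu_hat)           # optimal up to scale
w /= np.abs(w).sum()                             # normalize for reporting

def snr(w, mu, Sigma):
    return w @ mu / np.sqrt(w @ Sigma @ w)       # signal-to-noise of composite

print("estimated weights:", np.round(w, 3))
print("SNR, optimal vs equal weights:",
      round(snr(w, true_mu, true_Sigma), 3),
      round(snr(np.ones(4), true_mu, true_Sigma), 3))
```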
A Robust Design Methodology for Optimal Microscale Secondary Flow Control in Compact Inlet Diffusers
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Keller, Dennis J.
2001-01-01
It is the purpose of this study to develop an economical Robust Design methodology for microscale secondary flow control in compact inlet diffusers. To illustrate the potential of economical Robust Design methodology, two different mission strategies were considered for the subject inlet, namely Maximum Performance and Maximum HCF Life Expectancy. The Maximum Performance mission maximized total pressure recovery, while the Maximum HCF Life Expectancy mission minimized the mean of the first five Fourier harmonic amplitudes, i.e., 'collectively' reduced all the harmonic half-amplitudes of engine face distortion. Each of the mission strategies was subject to a low engine face distortion constraint, i.e., DC60<0.10, which is a level acceptable for commercial engines. For each of these mission strategies, an 'Optimal Robust' (open loop control) and an 'Optimal Adaptive' (closed loop control) installation was designed over a twenty-degree angle-of-incidence range. The Optimal Robust installation used economical Robust Design methodology to arrive at a single design which operated over the entire angle-of-incidence range (open loop control). The Optimal Adaptive installation optimized all the design parameters at each angle-of-incidence. Thus, the Optimal Adaptive installation would require a closed loop control system to sense a proper signal for each effector and modify that effector device, whether mechanical or fluidic, for optimal inlet performance. In general, the performance differences between the Optimal Adaptive and Optimal Robust installation designs were found to be marginal. This suggests that Optimal Robust open loop installation designs can be very competitive with Optimal Adaptive closed loop designs. Secondary flow control in inlets is inherently robust, provided it is optimally designed. Therefore, the new methodology presented in this paper, a combined-array 'Lower Order' approach to Robust DOE, offers the aerodynamicist a very viable and economical way of exploring the concept of Robust inlet design, where the mission variables are brought directly into the inlet design process and insensitivity or robustness to the mission variables becomes a design objective.
Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2006-01-01
Genetic and evolutionary algorithms have been applied to solve numerous problems in engineering design, where they have been used primarily as optimization procedures. These methods have an advantage over conventional gradient-based search procedures because they are capable of finding global optima of multi-modal functions and searching design spaces with disjoint feasible regions. They are also robust in the presence of noisy data. Another desirable feature of these methods is that they can efficiently use distributed and parallel computing resources, since multiple function evaluations (flow simulations in aerodynamic design) can be performed simultaneously and independently on multiple processors. For these reasons genetic and evolutionary algorithms are being used more frequently in design optimization. Examples include airfoil and wing design and compressor and turbine airfoil design. They are also finding increasing use in multiple-objective and multidisciplinary optimization. This lecture will focus on differential evolution (DE), a relatively new member of the general class of evolutionary methods. This method is easy to use and program, and it requires relatively few user-specified constants. These constants are easily determined for a wide class of problems. Fine-tuning the constants will, of course, yield the solution to the optimization problem at hand more rapidly. DE can be efficiently implemented on parallel computers and can be used for continuous, discrete, and mixed discrete/continuous optimization problems. It does not require the objective function to be continuous and is noise tolerant. DE and applications to single- and multiple-objective optimization will be included in the presentation and lecture notes. A method for aerodynamic design optimization that is based on neural networks will also be included as a part of this lecture. The method offers advantages over traditional optimization methods. It is more flexible than other methods in dealing with design in the context of both steady and unsteady flows, partial and complete data sets, combined experimental and numerical data, inclusion of various constraints and rules of thumb, and other issues that characterize the aerodynamic design process. Neural networks provide a natural framework within which a succession of numerical solutions of increasing fidelity, incorporating more realistic flow physics, can be represented and utilized for optimization. Neural networks also offer an excellent framework for multiple-objective and multidisciplinary design optimization. Simulation tools from various disciplines can be integrated within this framework, and rapid trade-off studies involving one or many disciplines can be performed. The prospect of combining neural-network-based optimization methods and evolutionary algorithms to obtain a hybrid method with the best properties of both will be included in this presentation. Achieving solution diversity and accurate convergence to the exact Pareto front in multiple-objective optimization usually requires significant computational effort with evolutionary algorithms. In this lecture we will also explore the possibility of using neural networks to obtain estimates of the Pareto optimal front using non-dominated solutions generated by DE as training data. Neural network estimators have the potential advantage of reducing the number of function evaluations required to obtain solution accuracy and diversity, thus reducing the cost of design.
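As a concrete illustration of how few user constants DE needs, here is a minimal DE/rand/1/bin sketch; population size, the differential weight F, and the crossover rate CR are the only tuning constants. This is a generic textbook implementation, not code from the lecture.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin: F and CR are the few user-specified constants."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)     # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True               # force at least one gene over
            trial = np.where(cross, mutant, pop[i])       # binomial crossover
            fc = f(trial)
            if fc <= cost[i]:                             # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    k = cost.argmin()
    return pop[k], cost[k]

# Example: minimize the multi-modal Rastrigin function in 5-D
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
x_best, f_best = differential_evolution(rastrigin, [(-5.12, 5.12)] * 5)
print(x_best, f_best)
```

The inner loop over the population is what maps naturally onto the parallel function evaluations the lecture emphasizes.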
A Method for Optimizing Non-Axisymmetric Liners for Multimodal Sound Sources
NASA Technical Reports Server (NTRS)
Watson, W. R.; Jones, M. G.; Parrott, T. L.; Sobieski, J.
2002-01-01
Central processor unit times and memory requirements for a commonly used solver are compared to that of a state-of-the-art, parallel, sparse solver. The sparse solver is then used in conjunction with three constrained optimization methodologies to assess the relative merits of non-axisymmetric versus axisymmetric liner concepts for improving liner acoustic suppression. This assessment is performed with a multimodal noise source (with equal mode amplitudes and phases) in a finite-length rectangular duct without flow. The sparse solver is found to reduce memory requirements by a factor of five and central processing time by a factor of eleven when compared with the commonly used solver. Results show that the optimum impedance of the uniform liner is dominated by the least attenuated mode, whose attenuation is maximized by the Cremer optimum impedance. An optimized, four-segmented liner with impedance segments in a checkerboard arrangement is found to be inferior to an optimized spanwise segmented liner. This optimized spanwise segmented liner is shown to attenuate substantially more sound than the optimized uniform liner and tends to be more effective at the higher frequencies. The most important result of this study is the discovery that when optimized, a spanwise segmented liner with two segments gives attenuations equal to or substantially greater than an optimized axially segmented liner with the same number of segments.
Co-optimization of lithographic and patterning processes for improved EPE performance
NASA Astrophysics Data System (ADS)
Maslow, Mark J.; Timoshkov, Vadim; Kiers, Ton; Jee, Tae Kwon; de Loijer, Peter; Morikita, Shinya; Demand, Marc; Metz, Andrew W.; Okada, Soichiro; Kumar, Kaushik A.; Biesemans, Serge; Yaegashi, Hidetami; Di Lorenzo, Paolo; Bekaert, Joost P.; Mao, Ming; Beral, Christophe; Larivière, Stephane
2017-03-01
Complementary lithography is already being used for advanced logic patterns. The tight pitches for 1D Metal layers are expected to be created using spacer-based multiple-patterning ArF-i exposures, and the more complex cut/block patterns are made using EUV exposures. At the same time, control requirements for CDU, pattern shift, and pitch-walk are approaching sub-nanometer levels to meet edge placement error (EPE) requirements. Local variability, such as Line Edge Roughness (LER), local CDU, and Local Placement Error (LPE), are dominant factors in the total edge placement error budget. In the lithography process, improving the imaging contrast when printing the core pattern has been shown to improve the local variability. In the etch process, it has been shown that the fusion of atomic-level etching and deposition can also improve these local variations. Co-optimization of lithography and etch processing is expected to further improve the performance over individual optimizations alone. To meet the scaling requirements and keep process complexity to a minimum, EUV is increasingly seen as the platform for delivering the exposures for both the grating and the cut/block patterns beyond N7. In this work, we evaluated the overlay and pattern fidelity of an EUV block printed in a negative tone resist on an ArF-i SAQP grating. High-order overlay modeling and corrections during the exposure can reduce overlay error after development, a significant component of the total EPE. During etch, additional degrees of freedom are available to improve the pattern placement error in single layer processes. Process control of advanced-pitch nanoscale multi-patterning techniques as described above is exceedingly complicated in a high volume manufacturing environment. Incorporating potential patterning optimizations into both design and HVM controls for the lithography process is expected to bring a combined benefit over individual optimizations. In this work we will show the EPE performance improvement for a 32nm pitch SAQP + block patterned Metal 2 layer by co-optimizing the lithography and etch processes. Recommendations for further improvements and alternative processes will be given.
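A common first-order way to see how the listed contributors combine is a root-sum-square budget. The function below is a generic illustration with hypothetical term weighting (CD-type terms are halved because a CD error splits between the two edges of a feature); it is not the authors' actual EPE budget.

```python
import math

def epe_rss(overlay, cdu_global, lcdu_local, ler, placement_local):
    """Illustrative root-sum-square edge-placement-error budget (all terms in nm).

    CD-type contributors enter at half value because a CD error is shared
    between a feature's two edges; the choice of terms is illustrative.
    """
    terms = [overlay, cdu_global / 2, lcdu_local / 2, ler, placement_local]
    return math.sqrt(sum(t * t for t in terms))

# Hypothetical contributor values for a tight-pitch metal layer
print(epe_rss(overlay=1.5, cdu_global=1.0, lcdu_local=1.8, ler=1.2,
              placement_local=1.0))
```

A budget of this form makes the paper's point quantitatively: shrinking any one contributor in isolation gives diminishing returns once the local terms dominate, which motivates co-optimizing lithography and etch.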
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1980-01-01
The computational techniques are described which are utilized at Lewis Research Center to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements. Cycle performance and engine weight can be calculated, along with costs and installation effects, as opposed to fuel consumption alone. Almost any conceivable turbine engine cycle can be studied. These computer codes are: NNEP, WATE, LIFCYC, INSTAL, and POD DRG. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
NASA Technical Reports Server (NTRS)
Cavage, R. L.
1975-01-01
Results are presented of a study of lift-cruise fan V/STOL aircraft for the 1980-1985 time period. Technical and operating characteristics and technology requirements for the ultimate development of this type of aircraft are identified. Aircraft individually optimized to perform the antisubmarine warfare, carrier onboard delivery, combat search and rescue, and surveillance and surface attack missions are considered, along with a multi-purpose aircraft concept capable of performing all five missions at minimum total program cost. It is shown that lighter and smaller aircraft could be obtained by optimizing the design and fan selection for specific missions.
Retooling CFD for hypersonic aircraft
NASA Technical Reports Server (NTRS)
Dwoyer, Douglas L.; Kutler, Paul; Povinelli, Louis A.
1987-01-01
The CFD facility requirements of hypersonic aircraft configuration design development are different from those thus far employed for reentry vehicle design, because (1) the airframe and the propulsion system must be fully integrated to achieve the desired performance; (2) the vehicle must be reusable, with minimum refurbishment requirements between flights; and (3) vehicle performance must be optimized for a wide range of Mach numbers. An evaluation is presently made of the resolution of flow within shock waves, the tractability of transition and turbulence phenomena, chemical reaction modeling, and hypersonic boundary layer transition, with state-of-the-art CFD.
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained, multiple-objective-function problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer with the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
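The K-S transformation referred to above folds several (weighted) objectives and constraints into one smooth envelope function that a gradient method like BFGS can minimize. A minimal sketch of the standard form, with the usual max-shift for numerical stability; weighted objectives w_i·f_i would be aggregated the same way.

```python
import numpy as np

def ks_aggregate(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of several objectives/constraints.

    KS(g) = g_max + (1/rho) * log( sum_i exp(rho * (g_i - g_max)) )
    This is a smooth, differentiable upper bound on max(g_i); larger rho
    tracks the true maximum more closely. Shifting by g_max before the
    exponential avoids floating-point overflow.
    """
    g = np.asarray(values, dtype=float)
    g_max = g.max()
    return g_max + np.log(np.exp(rho * (g - g_max)).sum()) / rho

# Two normalized, weighted objectives (say, drag and sonic-boom loudness)
print(ks_aggregate([0.82, 0.79], rho=50.0))
```

Because the envelope is differentiable everywhere, the discrete-differentiation sensitivities mentioned in the abstract remain well defined even where the underlying maximum switches between objectives.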
The Effect of Aerodynamic Evaluators on the Multi-Objective Optimization of Flatback Airfoils
NASA Astrophysics Data System (ADS)
Miller, M.; Slew, K. Lee; Matida, E.
2016-09-01
With the long lengths of today's wind turbine rotor blades, there is a need to reduce mass, thereby requiring stiffer airfoils, while maintaining the aerodynamic efficiency of the airfoils, particularly in the inboard region of the blade where structural demands are highest. Using a genetic algorithm, the multi-objective aero-structural optimization of 30% thick flatback airfoils was systematically performed for a variety of aerodynamic evaluators such as lift-to-drag ratio (Cl/Cd), torque (Ct), and torque-to-thrust ratio (Ct/Cn) to determine their influence on airfoil shape and performance. The airfoil optimized for Ct possessed a 4.8% thick trailing edge and a rather blunt leading-edge region, which creates high levels of lift and, correspondingly, drag. Its ability to maintain similar levels of lift and drag under forced transition conditions demonstrated its insensitivity to roughness. The airfoil optimized for Cl/Cd displayed relatively poor roughness insensitivity due to its rather aft-located free transition points. The Ct/Cn optimized airfoil was found to have a very similar shape to that of the Cl/Cd airfoil, with a slightly more blunt leading edge which aided in providing higher levels of lift and moderate insensitivity to roughness. The influence of the chosen aerodynamic evaluator under the specified conditions and constraints in the optimization of wind turbine airfoils is shown to have a direct impact on airfoil shape and performance.
Development and demonstration of an on-board mission planner for helicopters
NASA Technical Reports Server (NTRS)
Deutsch, Owen L.; Desai, Mukund
1988-01-01
Mission management tasks can be distributed within a planning hierarchy, where each level of the hierarchy addresses a scope of action, an associated time scale or planning horizon, and requirements for plan generation response time. The current work is focused on the far-field planning subproblem, with a scope and planning horizon encompassing the entire mission and with a required response time of about two minutes. The far-field planning problem is posed as a constrained optimization problem, and algorithms and structural organizations are proposed for the solution. Algorithms are implemented in a developmental environment, and performance is assessed with respect to optimality and feasibility for the intended application and in comparison with alternative algorithms. This is done for the three major components of far-field planning: goal planning, waypoint path planning, and timeline management. It appears feasible to meet performance requirements on a 10 Mips flyable processor (dedicated to far-field planning) using a heuristically-guided simulated annealing technique for the goal planner, a modified A* search for the waypoint path planner, and a speed scheduling technique developed for this project.
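For reference, the waypoint planner above is described as a modified A* search. The sketch below is only the textbook A* baseline on a 4-connected occupancy grid, to show the mechanics; it does not reproduce the project's modifications.

```python
import heapq

def a_star(grid, start, goal):
    """Plain A* on a 4-connected occupancy grid (1 = obstacle, 0 = free)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]                   # (f, g, node, parent)
    came, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came:
            continue                                          # already expanded
        came[node] = parent
        if node == goal:                                      # reconstruct the path
            path = []
            while node:
                path.append(node)
                node = came[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, node))
    return None                                               # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
```

A mission planner would replace the unit step cost with terrain- and threat-dependent costs, which is one natural place the cited modifications could enter.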
Design of optimal buffer layers for CuInGaSe2 thin-film solar cells (Conference Presentation)
NASA Astrophysics Data System (ADS)
Lordi, Vincenzo; Varley, Joel B.; He, Xiaoqing; Rockett, Angus A.; Bailey, Jeff; Zapalac, Geordie H.; Mackie, Neil; Poplavskyy, Dmitry; Bayman, Atiye
2016-09-01
Optimizing the buffer layer in manufactured thin-film PV is essential to maximize device efficiency. Here, we describe a combined synthesis, characterization, and theory effort to design optimal buffers based on the (Cd,Zn)(O,S) alloy system for CIGS devices. Optimization of buffer composition and absorber/buffer interface properties in light of several competing requirements for maximum device efficiency was performed, along with process variations to control the film and interface quality. The most relevant buffer properties controlling performance include band gap, conduction band offset with the absorber, dopability, interface quality, and film crystallinity. Control of an all-PVD deposition process enabled variation of buffer composition, crystallinity, doping, and quality of the absorber/buffer interface. Analytical electron microscopy was used to characterize the film composition and morphology, while hybrid density functional theory was used to predict optimal compositions and growth parameters based on computed material properties. Process variations were developed to produce layers with controlled crystallinity, varying from amorphous to fully epitaxial, depending primarily on oxygen content. Elemental intermixing between buffer and absorber, particularly involving Cd and Cu, also is controlled and significantly affects device performance. Secondary phase formation at the interface is observed for some conditions and may be detrimental depending on the morphology. Theoretical calculations suggest optimal composition ranges for the buffer based on a suite of computed properties and drive process optimizations connected with observed film properties. Prepared by LLNL under Contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Sohn, Jung Woo; Jeon, Juncheol; Nguyen, Quoc Hung; Choi, Seung-Bok
2015-08-01
In this paper, a disc-type magneto-rheological (MR) brake is designed for a mid-sized motorcycle and its performance is experimentally evaluated. The proposed MR brake consists of an outer housing, a rotating disc immersed in MR fluid, and a copper wire coiled around a bobbin to generate a magnetic field. The structural configuration of the MR brake is first presented with consideration of the installation space for the conventional hydraulic brake of a mid-sized motorcycle. The design parameters of the proposed MR brake are optimized to satisfy design requirements such as the braking torque, the total mass of the MR brake, and the cruising temperature caused by the magnetic-field friction of the MR fluid. In the optimization procedure, the braking torque is calculated based on the Herschel-Bulkley rheological model, which predicts MR fluid behavior well at high shear rates. An optimization tool based on finite element analysis is used to obtain the optimized dimensions of the MR brake. After manufacturing the MR brake, its mechanical performance in terms of response time, braking torque, and cruising temperature is experimentally evaluated.
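To make the torque model concrete: with Herschel-Bulkley shear stress τ(r) = τ_y + K(ωr/h)ⁿ across a fluid gap h, the quasi-static braking torque integrates τ·r over the wetted annulus and has a closed form. The sketch below uses hypothetical dimensions and fluid constants, not the paper's optimized design.

```python
import math

def mr_brake_torque(r_in, r_out, gap, omega, tau_y, K, n, faces=2):
    """Quasi-static braking torque (N*m) of a disc-type MR brake.

    Herschel-Bulkley stress: tau(r) = tau_y + K * (omega * r / gap)**n
    Torque:  T = faces * 2*pi * integral_{r_in}^{r_out} tau(r) * r**2 dr,
    integrated here in closed form.
    """
    def antiderivative(r):
        # integral of tau(r) * r^2 dr
        return tau_y * r**3 / 3 + K * (omega / gap)**n * r**(n + 3) / (n + 3)
    return faces * 2 * math.pi * (antiderivative(r_out) - antiderivative(r_in))

# Hypothetical design: 20-60 mm annulus, 1 mm gap, 1000 rpm, 40 kPa yield stress
omega = 1000 * 2 * math.pi / 60
print(mr_brake_torque(r_in=0.02, r_out=0.06, gap=1e-3, omega=omega,
                      tau_y=4.0e4, K=1.0, n=0.8))
```

The field-on yield stress τ_y is the dominant lever: it scales the torque roughly with the cube of the outer radius, which is why the disc radius trades directly against mass in the optimization.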
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
The industrial applications of the Coupled Tank System (CTS) are widespread, especially in the chemical process industries. The overall process requires liquids to be pumped, stored in a tank, and pumped again to another tank. The level of liquid in each tank needs to be controlled, and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two variants of the Particle Swarm Optimization (PSO) algorithm are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and the Priority-based Fitness Scheme in Particle Swarm Optimization (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE), and overshoot (OS). It is demonstrated that implementing PSO via the Priority-based Fitness Scheme (PFPSO) is a promising technique for controlling the desired liquid level and improving system performance compared with standard PSO.
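A minimal sketch of the standard-PSO variant tuning (Kp, Ki, Kd) against an ITAE cost is given below. The two-tank plant model and gain bounds are assumptions for illustration, and the priority-based fitness ranking of PFPSO is not reproduced.

```python
import numpy as np

def itae_cost(gains, dt=0.1, t_end=50.0):
    """ITAE of a PID loop on a toy two-tank plant (illustrative model)."""
    kp, ki, kd = gains
    y = y_prev = integ = 0.0
    x1 = x2 = 0.0                       # two tanks in series
    cost, setpoint = 0.0, 1.0
    for k in range(int(t_end / dt)):
        e = setpoint - y
        integ += e * dt
        deriv = -(y - y_prev) / dt      # derivative on measurement
        u = float(np.clip(kp * e + ki * integ + kd * deriv, 0.0, 10.0))
        y_prev = y
        x1 += dt * (u - x1) / 5.0       # tank 1, 5 s time constant
        x2 += dt * (x1 - x2) / 8.0      # tank 2, 8 s time constant
        y = x2
        cost += (k * dt) * abs(e) * dt  # time-weighted absolute error
    return cost

def pso(f, bounds, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Standard global-best PSO (PFPSO's priority ranking not included)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([f(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([f(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

gains, cost = pso(itae_cost, [(0, 20), (0, 5), (0, 10)])
print("Kp, Ki, Kd =", gains, "ITAE =", cost)
```

PFPSO differs in the fitness evaluation step: candidates are ranked by prioritized criteria (Ts, then SSE, then OS) rather than a single scalar cost, which is the comparison the paper actually studies.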
SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models
NASA Astrophysics Data System (ADS)
Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.
2013-12-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.
Design Tool Using a New Optimization Method Based on a Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on the initial conditions and risk converging to a local optimum. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical results show that the performance of the method is sufficient for practical use.
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce the CPU time by one third relative to solving the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
Power optimization of ultrasonic friction-modulation tactile interfaces.
Wiertlewski, Michael; Colgate, J Edward
2015-01-01
Ultrasonic friction-modulation devices provide rich tactile sensation on flat surfaces and have the potential to restore tangibility to touchscreens. To date, their adoption into consumer electronics has been in part limited by relatively high power consumption, incompatible with the requirements of battery-powered devices. This paper introduces a method that optimizes the energy efficiency and performance of this class of devices. It considers optimal energy transfer to the impedance provided by the finger interacting with the surface. Constitutive equations are determined from the mode shape of the interface and the piezoelectric coupling of the actuator. The optimization procedure employs a lumped parameter model to simplify the treatment of the problem. Examples and an experimental study show the evolution of the optimal design as a function of the impedance of the finger.
Practical layer designs for polarizing beam-splitter cubes.
von Blanckenhagen, Bernhard
2006-03-01
Liquid-crystal-on-silicon (LCoS) based digital projection systems require high-performance polarizing beam splitters. The classical beam-splitter cube with an immersed interference coating can fulfill these requirements. Practical layer designs can be generated by computer optimization using the classic MacNeille polarizer as the starting design. Multilayer structures with 100 nm bandwidth covering the blue, green, or red spectral region, and one design covering the whole visible spectral region, are designed. In a second step these designs are realized by using plasma-ion-assisted deposition. The performance of the practical beam-splitter cubes is compared with the theoretical performance of the layer designs.
A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles
NASA Astrophysics Data System (ADS)
Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.
The aim of the present work is to introduce a fast procedure for optimizing thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections, etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures and meet the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. These simplified local FEM models are useful because they are time-saving and very simple to build; they are essentially one-dimensional and can be used in optimization processes to determine the optimum configuration with regard to weight, temperature, and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two- and three-dimensional analyses are performed in order to validate these simplified models. Thermal-structural analyses and optimizations are executed by adopting the Ansys FEM code.
Reduced state feedback gain computation. [optimization and control theory for aircraft control
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
Because the application of conventional optimal linear regulator theory to flight controller design requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. Therefore, a stochastic linear model is presented that accounts for aircraft parameter and initial-condition uncertainty, measurement noise, turbulence, pilot command, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was performed for both finite and infinite time performance indices, without gradient computation, by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh-order process show the proposed procedures to be very effective.
Sakurada, Takeshi; Nakajima, Takeshi; Morita, Mitsuya; Hirai, Masahiro; Watanabe, Eiju
2017-01-01
It is believed that motor performance improves when individuals direct attention to movement outcome (external focus, EF) rather than to body movement itself (internal focus, IF). However, our previous study found that the optimal individual attentional strategy depended on motor imagery ability. We explored whether individual motor imagery ability in stroke patients also affects the optimal attentional strategy for motor control. Individual motor imagery ability was determined as either kinesthetic- or visual-dominant by a questionnaire in 28 patients and 28 healthy controls. Participants then performed a visuomotor task that required tracing a trajectory under three attentional conditions: no instruction (NI), attention to hand movement (IF), or attention to cursor movement (EF). Movement error in the stroke group strongly depended on individual modality dominance of motor imagery. Patients with kinesthetic dominance showed higher motor accuracy under the IF condition, but with concomitantly lower velocity. Alternatively, patients with visual dominance showed improvements in both speed and accuracy under the EF condition. These results suggest that the optimal attentional strategy for improving motor accuracy in stroke rehabilitation differs according to the individual dominance of motor imagery. Our findings may contribute to the development of tailor-made pre-assessment and rehabilitation programs optimized for individual cognitive abilities. PMID:28094320
Optimization of a reversible hood for protecting a pedestrian's head during car collisions.
Huang, Sunan; Yang, Jikuang
2010-07-01
This study evaluated and optimized the performance of a reversible hood (RH) for preventing head injuries to an adult pedestrian in car collisions. The FE model of a production car front was introduced and validated. The baseline RH was developed from the original hood in the validated car front model. In order to evaluate the protective performance of the baseline RH, the FE models of an adult headform and a 50th percentile human head were used in parallel to impact the baseline RH. Based on this evaluation, the response surface method was applied to optimize the RH in terms of material stiffness, lifting speed, and lifted height. Finally, the headform model and the human head model were again used to evaluate the protective performance of the optimized RH. It was found that the lifted baseline RH substantially reduces the impact responses of the headform model and the human head model compared with the retracted and still-lifting baseline RH. When the optimized RH was lifted, the HIC values of the headform model and the human head model were further reduced to well below 1000, so that the risk of pedestrian head injury can be mitigated as required by EEVC WG17.
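The HIC threshold cited above is defined as a windowed, time-weighted average of the resultant head acceleration. A brute-force sketch on a synthetic pulse is shown below; the 15 ms window is one common choice (regulatory variants also use 36 ms or an unrestricted window), and the pulse is illustrative, not the paper's simulation output.

```python
import numpy as np

def hic(t, a, window=0.015):
    """Head Injury Criterion from a resultant (nonnegative) acceleration trace.

    t : time (s);  a : acceleration in g.
    HIC = max over t1 < t2, t2 - t1 <= window, of
          (t2 - t1) * ( (1/(t2 - t1)) * integral_{t1}^{t2} a dt )**2.5
    Brute-force search over sample pairs; adequate for short traces.
    """
    # cumulative trapezoidal integral of a(t) gives O(1) window averages
    ca = np.concatenate(([0.0],
                         np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt_ij = t[j] - t[i]
            if dt_ij > window:
                break
            avg = (ca[j] - ca[i]) / dt_ij
            best = max(best, dt_ij * avg**2.5)
    return best

# Synthetic half-sine pulse: 120 g peak over 10 ms (illustrative only)
t = np.linspace(0.0, 0.02, 401)
a = 120 * np.sin(np.pi * t / 0.01) * (t <= 0.01)
print(round(hic(t, a), 1))
```

The 2.5 exponent is what makes HIC so sensitive to peak acceleration, which is why lifting the hood (adding crush distance and lowering the peak) pays off disproportionately in the criterion.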
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao Daliang; Earl, Matthew A.; Luan, Shuang
2006-04-15
A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
Chung, Jongsuk; Son, Dae-Soon; Jeon, Hyo-Jeong; Kim, Kyoung-Mee; Park, Gahee; Ryu, Gyu Ha; Park, Woong-Yang; Park, Donghyun
2016-01-01
Targeted capture massively parallel sequencing is increasingly being used in clinical settings, and as costs continue to decline, use of this technology may become routine in health care. However, a limited amount of tissue has often been a challenge in meeting quality requirements. To offer a practical guideline for the minimum amount of input DNA for targeted sequencing, we optimized and evaluated the performance of targeted sequencing depending on the input DNA amount. First, using various amounts of input DNA, we compared commercially available library construction kits and selected Agilent’s SureSelect-XT and KAPA Biosystems’ Hyper Prep kits as the kits most compatible with targeted deep sequencing using Agilent’s SureSelect custom capture. Then, we optimized the adapter ligation conditions of the Hyper Prep kit to improve library construction efficiency and adapted multiplexed hybrid selection to reduce the cost of sequencing. In this study, we systematically evaluated the performance of the optimized protocol depending on the amount of input DNA, ranging from 6.25 to 200 ng, suggesting the minimal input DNA amounts based on coverage depths required for specific applications. PMID:27220682
Gardiner, James; Bari, Abu Zeeshan; Kenney, Laurence; Twiste, Martin; Moser, David; Zahedi, Saeed; Howard, David
2017-12-01
Current energy storage and return prosthetic feet only marginally reduce the cost of amputee locomotion compared with basic solid ankle cushioned heel feet, possibly due to their lack of push-off at the end of stance. To the best of our knowledge, a prosthetic ankle that utilizes a hydraulic variable displacement actuator (VDA) to improve push-off performance has not previously been proposed. Therefore, here we report a design optimization and simulation feasibility study for a VDA-based prosthetic ankle. The proposed device stores the eccentric ankle work done from heel strike to maximum dorsiflexion in a hydraulic accumulator and then returns the stored energy to power push-off. Optimization was used to establish the best spring characteristic and gear ratio between the ankle and the VDA. The corresponding simulations show that, in level walking, normal push-off is achieved and, per gait cycle, the energy stored in the accumulator increases by 22% of the requirement for normal push-off. Although the results are promising, there are many unanswered questions, and for this approach to succeed a new miniature, low-loss, lightweight VDA would be required that is half the size of the smallest commercially available device.
Optimization algorithms for large-scale multireservoir hydropower systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiew, K.L.
Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT), and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria, which include accuracy of results, rate of convergence, smoothness of resulting storage and release trajectories, computer time and memory requirements, robustness, and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% of one another. The highest objective value is obtained by IDP, followed closely by OCT. The computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.
NASA Technical Reports Server (NTRS)
Weber, Gary A.
1991-01-01
During the 90-day study, support was provided to NASA in defining a point-of-departure space transfer vehicle (STV). The resulting STV concept was performance optimized with a two-stage LTV/LEV configuration. Appendix A reports on the effort during this period of the study. From the end of the 90-day study until the March Interim Review, effort was placed on optimizing the two-stage vehicle approach identified in the 90-day effort. After the March Interim Review, the effort was expanded to perform a full architectural trade study with the intent of developing a decision database to support STV system decisions in response to changing SEI infrastructure concepts. Several of the architecture trade studies were combined in a System Architecture Trade Study. In addition to this trade, system optimization/definition trades and analyses were completed and some special topics were addressed. Program- and system-level trade study and analyses methodologies and results are presented in this section. Trades and analyses covered in this section are: (1) a system architecture trade study; (2) evolution; (3) safety and abort considerations; (4) STV as a launch vehicle upper stage; and (5) optimum crew and cargo split.
A Pairwise Naïve Bayes Approach to Bayesian Classification.
Asafu-Adjei, Josephine K; Betensky, Rebecca A
2015-10-01
Despite the relatively high accuracy of the naïve Bayes (NB) classifier, there may be several instances where it is not optimal, i.e. does not have the same classification performance as the Bayes classifier utilizing the joint distribution of the examined attributes. However, the Bayes classifier can be computationally intractable due to its required knowledge of the joint distribution. Therefore, we introduce a "pairwise naïve" Bayes (PNB) classifier that incorporates all pairwise relationships among the examined attributes, but does not require specification of the joint distribution. In this paper, we first describe the necessary and sufficient conditions under which the PNB classifier is optimal. We then discuss sufficient conditions for which the PNB classifier, and not NB, is optimal for normal attributes. Through simulation and actual studies, we evaluate the performance of our proposed classifier relative to the Bayes and NB classifiers, along with the HNB, AODE, LBR and TAN classifiers, using normal density and empirical estimation methods. Our applications show that the PNB classifier using normal density estimation yields the highest accuracy for data sets containing continuous attributes. We conclude that it offers a useful compromise between the Bayes and NB classifiers.
Rapid indirect trajectory optimization on highly parallel computing architectures
NASA Astrophysics Data System (ADS)
Antony, Thomas
Trajectory optimization is a field which can benefit greatly from the advantages offered by parallel computing. The current state-of-the-art in trajectory optimization focuses on the use of direct optimization methods, such as the pseudo-spectral method. These methods are favored due to their ease of implementation and large convergence regions, while indirect methods have largely been ignored in the literature in the past decade except for specific applications in astrodynamics. It has been shown that the shortcomings conventionally associated with indirect methods can be overcome by the use of a continuation method, in which complex trajectory solutions are obtained by solving a sequence of progressively difficult optimization problems. High performance computing hardware is trending towards more parallel architectures, as opposed to powerful single-core processors. Graphics Processing Units (GPU), which were originally developed for 3D graphics rendering, have gained popularity in the past decade as high-performance, programmable parallel processors. The Compute Unified Device Architecture (CUDA) framework, a parallel computing architecture and programming model developed by NVIDIA, is one of the most widely used platforms in GPU computing. GPUs have been applied to a wide range of fields that require the solution of complex, computationally demanding problems. A GPU-accelerated indirect trajectory optimization methodology which uses the multiple shooting method and continuation is developed using the CUDA platform. The various algorithmic optimizations used to exploit the parallelism inherent in the indirect shooting method are described. The resulting rapid optimal control framework enables the construction of high quality optimal trajectories that satisfy problem-specific constraints and fully satisfy the necessary conditions of optimality. The benefits of the framework are highlighted by construction of maximum terminal velocity trajectories for a hypothetical long range weapon system. The techniques used to construct an initial guess from an analytic near-ballistic trajectory and the methods used to formulate the necessary conditions of optimality in a manner that is transparent to the designer are discussed. Various hypothetical mission scenarios that enforce different combinations of initial, terminal, interior point and path constraints demonstrate the rapid construction of complex trajectories without requiring any a priori insight into the structure of the solutions. Trajectory problems of this kind were previously considered impractical to solve using indirect methods. The performance of the GPU-accelerated solver is found to be 2x-4x faster than MATLAB's bvp4c, even while running on GPU hardware that is five years behind the state-of-the-art.
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
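The coupled global/local idea behind the MADS optimizer, a swarm stage to locate the right basin followed by Levenberg-Marquardt to polish the solution, can be illustrated with SciPy. The stand-in below uses plain random sampling for the global stage and a two-exponential test problem; it is a sketch of the hybrid concept, not the MADS implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, t, y):
    """Residuals of a two-exponential model y = a1*exp(-k1 t) + a2*exp(-k2 t)."""
    a1, k1, a2, k2 = p
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) - y

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
y = 2.0 * np.exp(-0.3 * t) + 1.0 * np.exp(-2.0 * t) \
    + 0.01 * rng.standard_normal(t.size)          # synthetic noisy data

# Global stage (stand-in for the swarm): sample candidates, keep the best basin
candidates = rng.uniform([0, 0, 0, 0], [5, 5, 5, 5], size=(500, 4))
best = min(candidates, key=lambda p: np.sum(residuals(p, t, y) ** 2))

# Local stage: Levenberg-Marquardt polishes the best global candidate
fit = least_squares(residuals, best, args=(t, y), method="lm")
print(fit.x)
```

The division of labor is the point: the stochastic stage supplies robustness to local minima, while the gradient-based stage supplies the fast final convergence that calibration and source-identification runs need.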
Flight Control Development for the ARH-70 Armed Reconnaissance Helicopter Program
NASA Technical Reports Server (NTRS)
Christensen, Kevin T.; Campbell, Kip G.; Griffith, Carl D.; Ivler, Christina M.; Tischler, Mark B.; Harding, Jeffrey W.
2008-01-01
In July 2005, Bell Helicopter won the U.S. Army's Armed Reconnaissance Helicopter competition to produce a replacement for the OH-58 Kiowa Warrior capable of performing the armed reconnaissance mission. To meet the U.S. Army requirement that the ARH-70A have Level 1 handling qualities for the scout rotorcraft mission task elements defined by ADS-33E-PRF, Bell equipped the aircraft with their generic automatic flight control system (AFCS). Under the constraints of the tight ARH-70A schedule, the development team used modern parameter identification and control law optimization techniques to optimize the AFCS gains to simultaneously meet multiple handling qualities design criteria. This paper will show how linear modeling, control law optimization, and simulation have been used to produce a Level 1 scout rotorcraft for the U.S. Army, while minimizing the amount of flight testing required for AFCS development and handling qualities evaluation of the ARH-70A.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
NASA Astrophysics Data System (ADS)
Chen, Jing-Bo
2014-06-01
By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.
CFD research, parallel computation and aerodynamic optimization
NASA Technical Reports Server (NTRS)
Ryan, James S.
1995-01-01
Over five years of research in Computational Fluid Dynamics and its applications are covered in this report. Using CFD as an established tool, aerodynamic optimization on parallel architectures is explored. The objective of this work is to provide better tools to vehicle designers. Submarine design requires accurate force and moment calculations in flow with thick boundary layers and large separated vortices. Low noise production is critical, so flow into the propulsor region must be predicted accurately. The High Speed Civil Transport (HSCT) has been the subject of recent work. This vehicle is to be a passenger vehicle with the capability of cutting overseas flight times by more than half. A successful design must surpass the performance of comparable planes. Fuel economy, other operational costs, environmental impact, and range must all be improved substantially. For all these reasons, improved design tools are required, and these tools must eventually integrate optimization, external aerodynamics, propulsion, structures, heat transfer and other disciplines.
Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages advances in online learning to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol permeates benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
A Technical Survey on Optimization of Processing Geo Distributed Data
NASA Astrophysics Data System (ADS)
Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.
2018-04-01
With growing cloud services and technology, there is growth in the number of geographically distributed data centers that store large amounts of data. Analysis of geo-distributed data is required in various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. Distributed data processing is accompanied by issues in storage, computation, and communication. The key issues to be dealt with are time efficiency, cost minimization, and utility maximization. This paper describes various optimization methods, such as end-to-end multiphase and G-MR, using techniques such as Map-Reduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE, and AMP (Ant Colony Optimization) to handle these issues. In this paper the various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving Quality of Service and reducing computation and communication cost; and SAGE achieves performance improvement in processing geo-distributed data sets.
NASA Astrophysics Data System (ADS)
Aranza, M. F.; Kustija, J.; Trisno, B.; Hakim, D. L.
2016-04-01
The PID (Proportional-Integral-Derivative) controller was invented around 1910, yet it is still used in industry today, even though many kinds of modern controllers, such as fuzzy controllers and neural network controllers, are being developed. The performance of a PID controller depends on its proportional gain (Kp), integral gain (Ki) and derivative gain (Kd). These gains can be obtained using methods such as Ziegler-Nichols (ZN), gain-phase margin, root locus, minimum variance and gain scheduling; however, these methods are not optimal for controlling systems that are nonlinear or high-order, and some of them are relatively hard to apply. To overcome these obstacles, a particle swarm optimization (PSO) algorithm is proposed to obtain optimal Kp, Ki and Kd values. PSO is proposed because it converges reliably and does not require many iterations. In this research, the PID controller is applied to an AVR (Automatic Voltage Regulator). Based on analysis of the transient response, root-locus stability and frequency response, the performance of the PSO-tuned PID controller is better than that obtained with Ziegler-Nichols tuning.
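As a rough illustration of the tuning loop the abstract describes, the sketch below uses a textbook global-best PSO to pick (Kp, Ki, Kd) minimizing an integral-of-absolute-error cost on a simulated step response; the first-order plant is a hypothetical stand-in for the AVR dynamics, and the swarm coefficients are conventional defaults rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def closed_loop_cost(gains, dt=0.01, t_end=5.0):
    """Simulate a unit step response of a toy first-order plant
    G(s) = 1/(0.5 s + 1) under PID control; return integral |error|."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                      # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-2.0 * y + 2.0 * u)     # plant: ydot = (-y + u)/0.5
        prev_err = err
        cost += abs(err) * dt
        if not np.isfinite(y):             # penalize unstable gain sets
            return 1e6
    return cost

# Standard global-best PSO over (Kp, Ki, Kd).
n_particles, n_iter, w, c1, c2 = 20, 60, 0.7, 1.5, 1.5
lo, hi = np.array([0.0, 0.0, 0.0]), np.array([5.0, 5.0, 1.0])
pos = rng.uniform(lo, hi, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([closed_loop_cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([closed_loop_cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (Kp, Ki, Kd):", np.round(gbest, 3))
```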
Under-Track CFD-Based Shape Optimization for a Low-Boom Demonstrator Concept
NASA Technical Reports Server (NTRS)
Wintzer, Mathias; Ordaz, Irian; Fenbert, James W.
2015-01-01
The detailed outer mold line shaping of a Mach 1.6, demonstrator-sized low-boom concept is presented. Cruise trim is incorporated a priori as part of the shaping objective, using an equivalent-area-based approach. Design work is performed using a gradient-driven optimization framework that incorporates a three-dimensional, nonlinear flow solver, a parametric geometry modeler, and sensitivities derived using the adjoint method. The shaping effort is focused on reducing the under-track sonic boom level using an inverse design approach, while simultaneously satisfying the trim requirement. Conceptual-level geometric constraints are incorporated in the optimization process, including the internal layout of fuel tanks, landing gear, engine, and crew station. Details of the model parameterization and design process are documented for both flow-through and powered states, and the performance of these optimized vehicles is presented in terms of inviscid L/D, trim state, pressures in the near-field and at the ground, and predicted sonic boom loudness.
Research on the optimal structure configuration of dither RLG used in skewed redundant INS
NASA Astrophysics Data System (ADS)
Gao, Chunfeng; Wang, Qi; Wei, Guo; Long, Xingwu
2016-05-01
The actual combat effectiveness of weapon systems is restricted by the performance of the Inertial Navigation System (INS), especially in applications requiring high reliability such as fighters, satellites and submarines. Through the use of skewed sensor geometries, redundancy techniques have been applied to reduce the cost and improve the reliability of the INS. In this paper, the structure configuration and the inertial sensor characteristics of a Skewed Redundant Strapdown Inertial Navigation System (SRSINS) using dithered Ring Laser Gyroscopes (RLGs) are analyzed. Because of dither coupling effects, the system measurement errors can be amplified either when the individual gyro dither frequencies are near one another or when the structure of the SRSINS is unsuitable. Based on the characteristics of the RLG, research on the coupled vibration of dithered RLGs in the SRSINS is carried out. On the principles of optimal navigation performance, optimal reliability and optimal cost-effectiveness, a comprehensive evaluation scheme for the inertial sensor configuration of the SRSINS is given.
Multijunction Solar Cell Technology for Mars Surface Applications
NASA Technical Reports Server (NTRS)
Stella, Paul M.; Mardesich, Nick; Ewell, Richard C.; Mueller, Robert L.; Endicter, Scott; Aiken, Daniel; Edmondson, Kenneth; Fetze, Chris
2006-01-01
Solar cells used for Mars surface applications have been commercial space-qualified AM0-optimized devices. Due to the Martian atmosphere, these cells are not optimized for the Mars surface and as a result operate at reduced efficiency. A multi-year program, MOST (Mars Optimized Solar Cell Technology), managed by JPL and funded by NASA Code S, was initiated in 2004 to develop tools to modify commercial AM0 cells for the Mars surface solar spectrum and to fabricate Mars-optimized devices for verification. This effort required defining the surface incident spectrum, developing an appropriate laboratory solar simulator measurement capability, and developing and testing commercial cells modified for the Mars surface spectrum. This paper discusses the program, including results for the initial modified cells. Simulated Mars surface measurements of MER cells and Phoenix Lander cells (2007 launch) are provided to characterize the performance loss for those missions. In addition, the performance of the MER rover solar arrays is updated to reflect their more than two years of operation.
Wet cooling towers: rule-of-thumb design and simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leeper, Stephen A.
1981-07-01
A survey of wet cooling tower literature was performed to develop a simplified method of cooling tower design and simulation for use in power plant cycle optimization. The theory of heat exchange in wet cooling towers is briefly summarized. The Merkel equation (the fundamental equation of heat transfer in wet cooling towers) is presented and discussed. The cooling tower fill constant (Ka) is defined and values derived. A rule-of-thumb method for the optimized design of cooling towers is presented. The rule-of-thumb design method provides information useful in power plant cycle optimization, including tower dimensions, water consumption rate, exit air temperature, power requirements and construction cost. In addition, a method for simulation of cooling tower performance at various operating conditions is presented. This information is also useful in power plant cycle evaluation. Using the information presented, it will be possible to incorporate wet cooling tower design and simulation into a procedure to evaluate and optimize power plant cycles.
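For readers who want to reproduce the flavor of such rule-of-thumb calculations, the sketch below evaluates the Merkel integral with the classic four-point Chebyshev approximation; the saturated-air enthalpy table, flow conditions, and L/G ratio are rough illustrative values, not numbers from the report.

```python
import numpy as np

# Four-point Chebyshev evaluation of the Merkel integral
#   KaV/L = integral of cw dT / (h_s(T) - h_a)
# The saturated-air enthalpy values (kJ/kg dry air vs. deg C) below are
# approximate psychrometric-chart values used purely for illustration.
T_tab = np.array([20, 25, 30, 35, 40, 45], dtype=float)
h_tab = np.array([57.5, 76.1, 100.0, 129.9, 166.8, 213.5])
h_sat = lambda T: np.interp(T, T_tab, h_tab)

cw = 4.186                        # water specific heat, kJ/(kg K)
T_hot, T_cold = 40.0, 29.0        # water inlet/outlet temperatures, deg C
T_wb = 25.0                       # entering-air wet-bulb temperature, deg C
LG = 1.2                          # water-to-air mass flow ratio L/G

rng_T = T_hot - T_cold
h_air_in = h_sat(T_wb)            # entering air enthalpy (saturated at wet bulb)

merkel = 0.0
for frac in (0.1, 0.4, 0.6, 0.9):              # Chebyshev points
    T = T_cold + frac * rng_T                  # local water temperature
    h_a = h_air_in + LG * cw * (T - T_cold)    # air enthalpy from energy balance
    merkel += 1.0 / (h_sat(T) - h_a)

merkel *= cw * rng_T / 4.0
print(f"required KaV/L = {merkel:.3f}")
```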
Swarm intelligence for multi-objective optimization of synthesis gas production
NASA Astrophysics Data System (ADS)
Ganesan, T.; Vasant, P.; Elamvazuthi, I.; Ku Shaari, Ku Zilati
2012-11-01
In the chemical industry, the production of methanol, ammonia, hydrogen and higher hydrocarbons requires synthesis gas (syngas). The three main syngas production methods are carbon dioxide reforming (CRM), steam reforming (SRM) and partial oxidation of methane (POM). In this work, multi-objective (MO) optimization of the combined CRM and POM was carried out. The empirical model and the MO problem formulation for this combined process were obtained from previous works. The central objectives considered in this problem are methane conversion, carbon monoxide selectivity and the hydrogen-to-carbon-monoxide ratio. The MO nature of the problem was tackled using the Normal Boundary Intersection (NBI) method. Two techniques, the Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO), were then applied in conjunction with the NBI method. The performance of the two algorithms and the quality of the solutions were gauged using two performance metrics. Comparative studies and analysis of the optimization results were then carried out.
Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J
2017-07-14
In this paper, we discuss the optimization and implementation of a high-throughput process development (HTPD) tool that utilizes commercially available microliter-sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench scale and the clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), the comparable product quality results at all scales make this tool an appropriate scale model for purification and product quality comparisons of HTPD bioreactor conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time.
Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S
2018-06-01
Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
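A heavily simplified PySpark sketch of the sort-merge pattern is shown below; the file paths, four-column record layout, and partition count are hypothetical, and real VCF merging (headers, multi-allelic records, genotype joining) needs far more logic than this.

```python
# Minimal PySpark sketch of sorted merging of variant records by genomic
# location. This illustrates the divide-and-conquer idea only; the paper's
# optimized Hadoop/HBase/Spark schemas involve considerably more engineering.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sorted-vcf-merge").getOrCreate()

# Read many per-sample variant files (chrom, pos, sample, genotype) at once.
df = (spark.read
      .option("sep", "\t")
      .csv("/data/variants/*.tsv")            # hypothetical input path
      .toDF("chrom", "pos", "sample", "genotype")
      .withColumn("pos", F.col("pos").cast("long")))

# Range partitioning splits the genome into contiguous, independently
# sortable chunks (the "divide" step); sorting within partitions then
# proceeds in parallel without a single-node bottleneck (the "conquer" step).
merged = (df.repartitionByRange(64, "chrom", "pos")
            .sortWithinPartitions("chrom", "pos"))

merged.write.option("sep", "\t").csv("/data/merged", mode="overwrite")
spark.stop()
```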
Meta-heuristic algorithms as tools for hydrological science
NASA Astrophysics Data System (ADS)
Yoo, Do Guen; Kim, Joong Hoon
2014-12-01
In this paper, meta-heuristic optimization techniques and their applications to water resources engineering, particularly hydrological science, are introduced. In recent years, meta-heuristic optimization techniques have been introduced that can overcome the problems inherent in iterative simulations. These methods are able to find good solutions and require limited computation time and memory without requiring complex derivatives. Simulation-based meta-heuristic methods such as Genetic Algorithms (GAs) and Harmony Search (HS) have powerful searching abilities, which can occasionally overcome several drawbacks of traditional mathematical methods. For example, the HS algorithm is conceptualized from the process of musical performance seeking better harmony; such optimization algorithms seek a near-global optimum determined by the value of an objective function, providing a more robust determination than can be achieved through typical aesthetic estimation. In this paper, meta-heuristic algorithms and their applications in hydrological science (with a focus on GAs and HS) are discussed by subject, including a review of the existing literature in the field. Then, recent trends in optimization are presented, and a relatively new technique, the Smallest Small World Cellular Harmony Search (SSWCHS), is briefly introduced, with a summary of promising results obtained in previous studies. Overall, previous studies have demonstrated that meta-heuristic algorithms are effective tools for the development of hydrological models and the management of water resources.
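As a concrete example of the improvisation analogy, the following minimal Harmony Search sketch minimizes a standard sphere test function; the parameter values (HMCR, PAR, bandwidth) are common textbook defaults, not settings from any of the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(2)

# Bare-bones Harmony Search: each "harmony" is a candidate solution, and new
# harmonies mix memory recall, pitch adjustment, and random improvisation.
def sphere(x):
    return float(np.sum(x**2))

dim, hms, hmcr, par, bw, iters = 5, 20, 0.9, 0.3, 0.05, 5000
lo, hi = -5.0, 5.0
memory = rng.uniform(lo, hi, size=(hms, dim))     # harmony memory
fitness = np.array([sphere(h) for h in memory])

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                   # recall from harmony memory
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                # pitch adjustment
                new[j] += bw * rng.uniform(-1, 1)
        else:                                     # random improvisation
            new[j] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    f = sphere(new)
    worst = fitness.argmax()
    if f < fitness[worst]:                        # replace the worst harmony
        memory[worst], fitness[worst] = new, f

print(f"best objective after search: {fitness.min():.2e}")
```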
Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A
2018-06-01
Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Currently, surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experience and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a successful surgical pathway used on one patient is not necessarily the best approach for another patient. The question is how to define and obtain optimized, patient-specific surgical approach pathways. In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches, a weighted-sum approach and Pareto optimality, were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, the surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In the one case where the actual surgical approach differed, revision surgery was required and was performed utilizing the computationally derived approach pathway.
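The difference between the two selection strategies can be illustrated in a few lines of Python; the random three-objective cost matrix below is a synthetic stand-in for scored surgical pathways, and the weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# 200 candidate pathways scored on 3 costs (lower is better); synthetic data.
costs = rng.random((200, 3))

def pareto_mask(c):
    """Boolean mask of non-dominated rows (all objectives minimized)."""
    n = len(c)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Row i is dominated if some row is <= in every objective
        # and strictly < in at least one.
        dominators = np.all(c <= c[i], axis=1) & np.any(c < c[i], axis=1)
        if dominators.any():
            mask[i] = False
    return mask

# Strategy 1: weights encode the surgeon's preferences *before* the search.
weights = np.array([0.5, 0.3, 0.2])
best_weighted = costs[np.argmin(costs @ weights)]

# Strategy 2: compute the Pareto optimal set first, choose among it afterwards.
front = costs[pareto_mask(costs)]
print("weighted-sum pick:", np.round(best_weighted, 3))
print(f"Pareto set size: {len(front)} of {len(costs)} candidates")
```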
Statistical simplex approach to primary and secondary color correction in thick lens assemblies
NASA Astrophysics Data System (ADS)
Ament, Shelby D. V.; Pfisterer, Richard
2017-11-01
A glass selection optimization algorithm is developed for primary and secondary color correction in thick lens systems. The approach is based on the downhill simplex method and requires manipulation of the surface color equations to obtain a single glass-dependent parameter for each lens element. Linear correlation is used to relate this parameter to all other glass-dependent variables. The algorithm provides a statistical distribution of Abbe numbers for each element in the system. Examples with several lenses, from 2-element to 6-element systems, are presented to verify this approach. The proposed optimization algorithm is capable of finding glass solutions with a high degree of color correction without requiring an exhaustive search of the glass catalog.
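Since the engine of the algorithm is the downhill simplex method, a generic sketch using SciPy's Nelder-Mead implementation is shown below; the quadratic merit function is a hypothetical stand-in for the surface color equations, with one glass-dependent parameter per lens element.

```python
import numpy as np
from scipy.optimize import minimize

# Generic downhill-simplex (Nelder-Mead) minimization of a merit function.
# The target vector and coupling term below are invented; they merely mimic
# primary/secondary color residuals that mix per-element contributions.
target = np.array([1.2, -0.7, 0.3])

def color_merit(v):
    # Penalize deviation from a target parameter vector plus a coupling term.
    return np.sum((v - target) ** 2) + 0.1 * v.sum() ** 2

x0 = np.zeros(3)                        # one parameter per lens element
res = minimize(color_merit, x0, method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
print("optimal parameters:", np.round(res.x, 4), "merit:", f"{res.fun:.2e}")
```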
Computer-aided design analysis of 57-mm, angular-contact, cryogenic turbopump bearings
NASA Technical Reports Server (NTRS)
Armstrong, Elizabeth S.; Coe, Harold H.
1988-01-01
The Space Shuttle main engine high-pressure oxygen turbopumps have not achieved the service life required of them. This insufficiency has been due in part to the shortened life of the bearings. To improve the life of the existing turbopump bearings, an effort is under way to investigate bearing modifications that could be retrofitted into the present bearing cavity. Several bearing parameters were optimized using the computer program SHABERTH, which performs a thermomechanical simulation of a load support system. The computer analysis showed that improved bearing performance is feasible if low friction coefficients can be attained. Bearing geometries were optimized considering heat generation, equilibrium temperatures, and relative life. Thermal gradients through the bearings were found to be lower with liquid lubrication than with solid film lubrication, and a liquid oxygen coolant flowrate of approximately 4.0 kg/s was found to be optimal. This paper describes the analytical modeling used to determine these feasible modifications to improve bearing performance.
Aerostructural analysis and design optimization of composite aircraft
NASA Astrophysics Data System (ADS)
Kennedy, Graeme James
High-performance composite materials exhibit both anisotropic strength and stiffness properties. These anisotropic properties can be used to produce highly-tailored aircraft structures that meet stringent performance requirements, but these properties also present unique challenges for analysis and design. New tools and techniques are developed to address some of these important challenges. A homogenization-based theory for beams is developed to accurately predict the through-thickness stress and strain distribution in thick composite beams. Numerical comparisons demonstrate that the proposed beam theory can be used to obtain highly accurate results in up to three orders of magnitude less computational time than three-dimensional calculations. Due to the large finite-element model requirements for thin composite structures used in aerospace applications, parallel solution methods are explored. A parallel direct Schur factorization method is developed. The parallel scalability of the direct Schur approach is demonstrated for a large finite-element problem with over 5 million unknowns. In order to address manufacturing design requirements, a novel laminate parametrization technique is presented that takes into account the discrete nature of the ply-angle variables, and ply-contiguity constraints. This parametrization technique is demonstrated on a series of structural optimization problems including compliance minimization of a plate, buckling design of a stiffened panel and layup design of a full aircraft wing. The design and analysis of composite structures for aircraft is not a stand-alone problem and cannot be performed without multidisciplinary considerations. A gradient-based aerostructural design optimization framework is presented that partitions the disciplines into distinct process groups. An approximate Newton-Krylov method is shown to be an efficient aerostructural solution algorithm and excellent parallel scalability of the algorithm is demonstrated. An induced drag optimization study is performed to compare the trade-off between wing weight and induced drag for wing tip extensions, raked wing tips and winglets. The results demonstrate that it is possible to achieve a 43% induced drag reduction with no weight penalty, a 28% induced drag reduction with a 10% wing weight reduction, or a 20% wing weight reduction with a 5% induced drag penalty from a baseline wing obtained from a structural mass-minimization problem with fixed aerodynamic loads.
Numerical Device Modeling, Analysis, and Optimization of Extended-SWIR HgCdTe Infrared Detectors
NASA Astrophysics Data System (ADS)
Schuster, J.; DeWames, R. E.; DeCuir, E. A.; Bellotti, E.; Dhar, N.; Wijewarnasuriya, P. S.
2016-09-01
Imaging in the extended short-wavelength infrared (eSWIR) spectral band (1.7-3.0 μm) for astronomy applications is an area of significant interest. However, these applications require infrared detectors with extremely low dark current (less than 0.01 electrons per pixel per second for certain applications). In these detectors, sources of dark current that may limit the overall system performance are fundamental and/or defect-related mechanisms. Non-optimized growth or device processing may introduce material point defects within the HgCdTe bandgap, leading to Shockley-Read-Hall-dominated dark current. While realizing contributions to the dark current from only fundamental mechanisms should be the goal for attaining optimal device performance, it may not be readily feasible with current technology and/or resources. In this regard, the U.S. Army Research Laboratory performed physics-based, two- and three-dimensional numerical modeling of HgCdTe photovoltaic infrared detectors designed for operation in the eSWIR spectral band. The underlying impetus for this capability and study originates from a desire to reach fundamental performance limits via intelligent device design.
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.
1990-01-01
Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).
An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.
Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur
2017-01-01
Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level leader problem, subject to the optimality of a lower-level follower problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.
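The nested structure, and why it is so expensive, can be seen in a minimal sketch: every upper-level trial requires a full lower-level solve before the leader's objective can even be evaluated. The quadratic toy problem and crude grid search below are illustrative only; they are not the BLMA algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimal nested bilevel solve. The upper level searches over x; for each x
# the follower's problem must be solved to optimality first, which is why
# function-evaluation counts explode and why memetic (global + local)
# strategies are attractive.

def follower_best_response(x):
    # Lower level: y*(x) = argmin_y (y - x**2)**2 + 0.1 * y**2
    res = minimize_scalar(lambda y: (y - x**2) ** 2 + 0.1 * y ** 2)
    return res.x

def leader_objective(x):
    y = follower_best_response(x)            # enforce follower optimality
    return (x - 1.0) ** 2 + (y - 1.0) ** 2   # leader pays for x and y*(x)

# Crude global search at the upper level (a stand-in for a global phase).
xs = np.linspace(-2.0, 2.0, 401)
vals = np.array([leader_objective(x) for x in xs])
x_best = xs[vals.argmin()]
print(f"upper-level optimum x ~ {x_best:.3f}, "
      f"follower response y ~ {follower_best_response(x_best):.3f}")
```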
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.
2014-10-01
The Goddard cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. WRF is one of the most widely used weather prediction systems in the world, and its development is a collaborative effort around the globe. The Goddard microphysics scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the code of this important part of WRF. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs, rather than just kernels as GPUs do. The MIC coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers, although getting maximum performance out of the MIC requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on a Xeon Phi 7120P by a factor of 4.7x. Furthermore, the same optimizations improved performance on a dual-socket Intel Xeon E5-2670 system by a factor of 2.8x compared to the original code.
Global Design Optimization for Aerodynamics and Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)
2000-01-01
Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.
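A minimal sketch of the polynomial response-surface idea discussed above follows: fit a quadratic surrogate to a small design of experiments drawn from an expensive black box, then optimize the cheap surrogate instead. The objective function, sample sizes, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_evaluation(x):   # hypothetical CFD/experiment stand-in
    return (x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] + 0.2) ** 2

# Design of experiments: a handful of noisy samples of the black box.
X = rng.uniform(-1, 1, size=(25, 2))
y = expensive_evaluation(X) + 0.01 * rng.standard_normal(25)

# Quadratic basis: [1, x1, x2, x1^2, x2^2, x1*x2]
def basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)   # least-squares fit

# Optimize the surrogate on a dense grid (cheap, so brute force is fine).
g = np.linspace(-1, 1, 201)
G = np.array(np.meshgrid(g, g)).reshape(2, -1).T
pred = basis(G) @ coef
x_opt = G[pred.argmin()]
print("surrogate optimum near:", np.round(x_opt, 2), "(true optimum (0.3, -0.2))")
```

The same workflow generalizes to neural-network surrogates by swapping the quadratic fit for a regression model, which is the comparison the article reviews.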
Candidate eco-friendly gas mixtures for MPGDs
NASA Astrophysics Data System (ADS)
Benussi, L.; Bianco, S.; Saviano, G.; Muhammad, S.; Piccolo, D.; Ferrini, M.; Parvis, M.; Grassini, S.; Colafranceschi, S.; Kjølbro, J.; Sharma, A.; Yang, D.; Chen, G.; Ban, Y.; Li, Q.
2018-02-01
Modern gas detectors for particle detection require F-based gases for optimal performance. Recent regulations demand that the use of these environmentally unfriendly F-based gases be limited or banned. This review studies the properties of potential eco-friendly candidate gas replacements.
Best Practices for Optimizing DoD Contractor Safety and Occupational Health Program Performance
2012-12-01
such as the Accident Prevention Plan (APP), Activity Hazard Analysis (AHA), Quality Assurance Surveillance Plans (QASP), etc. Contract administration...technology support, medical, and maintenance of equipment and facilities. The DoD Guidebook for the Acquisition of Services provides acquisition...OSHA regulations and perform in accordance with an applicable accident prevention program that complies with State and Federal requirements.
A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning
NASA Astrophysics Data System (ADS)
Basdekas, L.; Stewart, N.; Triana, E.
2013-12-01
Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system, coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo-informed or climate-change-informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models does not require parameterization, which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. The use of this framework will help CSU evaluate tradeoffs in a continually changing world.
Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.
Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
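The decomposition idea behind the alternating direction method of multipliers (ADMM) can be shown on a toy consensus problem; the quadratic agent costs below are a stand-in for inverter objectives, and the sketch omits the power-flow constraints and semidefinite relaxation that the actual formulation requires.

```python
import numpy as np

# Tiny consensus-ADMM illustration: several agents (think customer-owned PV
# systems) hold private quadratic costs f_i(x) = (x - a_i)^2 and must agree
# on a common setpoint z coordinated by the utility, exchanging only their
# local iterates rather than their private data.
a = np.array([0.2, 0.8, 1.5, -0.4])      # private cost parameters
rho, iters = 1.0, 100
x = np.zeros_like(a)                     # local variable copies
u = np.zeros_like(a)                     # scaled dual variables
z = 0.0                                  # shared (consensus) variable

for _ in range(iters):
    # Local x-updates: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2
    x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
    # The coordinator only needs the average of the local iterates.
    z = np.mean(x + u)
    # Dual updates are likewise fully local.
    u = u + x - z

print(f"consensus setpoint z = {z:.4f} (centralized optimum {a.mean():.4f})")
```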
NASA Technical Reports Server (NTRS)
Farrara, John D.; Drummond, Leroy A.; Mechoso, Carlos R.; Spahr, Joseph A.
1998-01-01
The design, implementation and performance optimization on the CRAY T3E of an atmospheric general circulation model (AGCM) which includes the transport of, and chemical reactions among, an arbitrary number of constituents is reviewed. The parallel implementation is based on a two-dimensional (longitude and latitude) data domain decomposition. Initial optimization efforts centered on minimizing the impact of substantial static and weakly dynamic load imbalances among processors through load redistribution schemes. Recent optimization efforts have centered on single-node optimization. Strategies employed include loop unrolling, both manually and through the compiler, the use of an optimized assembler-code library for special function calls, and restructuring of parts of the code to improve data locality. Data exchanges and synchronizations involved in coupling different data-distributed models can account for a significant fraction of the running time. Therefore, the required scattering and gathering of data must be optimized. In systems such as the T3E, there is much more aggregate bandwidth in the total system than in any particular processor. This suggests a distributed design. The design and implementation of such a distributed 'Data Broker' as a means to efficiently couple the components of our climate system model are described.
Fine-Tuning ADAS Algorithm Parameters for Optimizing Traffic ...
With the development of Connected Vehicle technology that facilitates wireless communication among vehicles and road-side infrastructure, Advanced Driver Assistance Systems (ADAS) can be adopted as an effective tool for accelerating traffic safety and mobility optimization at various highway facilities. To this end, traffic management centers identify the optimal ADAS algorithm parameter set that enables the maximum improvement of traffic safety and mobility performance, and broadcast the optimal parameter set wirelessly to individual ADAS-equipped vehicles. After adopting the optimal parameter set, ADAS-equipped drivers become active agents in the traffic stream that work collectively and consistently to prevent traffic conflicts, lower the intensity of traffic disturbances, and suppress the development of traffic oscillations into heavy traffic jams. Successful implementation of this objective requires the capability to capture the impact of the ADAS on driving behaviors and to measure traffic safety and mobility performance under the influence of the ADAS. To address this challenge, this research proposes a synthetic methodology that incorporates ADAS-affected driving behavior modeling and state-of-the-art microscopic traffic flow modeling into a virtually simulated environment. Building on such an environment, the optimal ADAS algorithm parameter set is identified through an optimization programming framework to enable the maximum improvement of traffic safety and mobility performance.
The effects of experimental pain and induced optimism on working memory task performance.
Boselie, Jantine J L M; Vancleef, Linda M G; Peters, Madelon L
2016-07-01
Pain can interrupt and deteriorate executive task performance. We have previously shown that experimentally induced optimism can diminish the deteriorating effect of cold pressor pain on a subsequent working memory task (i.e., the operation span task). In two successive experiments we sought further evidence for the protective role of optimism on pain-induced working memory impairments. We used another working memory task (i.e., the 2-back task) that was performed either after or during pain induction. Study 1 employed a 2 (optimism vs. no-optimism) x 2 (pain vs. no-pain) x 2 (pre-score vs. post-score) mixed factorial design. In half of the participants, optimism was induced by the Best Possible Self (BPS) manipulation, which required them to write about and visualize a future life in which everything turned out for the best. In the control condition, participants wrote about and visualized a typical day in their life (TD). Next, participants completed either the cold pressor task (CPT) or a warm water control task (WWCT). Before (baseline) and after the CPT or WWCT, participants' working memory performance was measured with the 2-back task. The 2-back task measures the ability to monitor and update working memory representations by asking participants to indicate whether the current stimulus corresponds to the stimulus presented two stimuli earlier. Study 2 had a 2 (optimism vs. no-optimism) x 2 (pain vs. no-pain) mixed factorial design. After receiving the BPS or control manipulation, participants completed the 2-back task twice: once with painful heat stimulation, and once without any stimulation (counterbalanced order). Continuous heat stimulation was used, with temperatures oscillating around 1°C above and 1°C below the individual pain threshold. In study 1, the results did not show an effect of cold pressor pain on subsequent 2-back task performance. Results of study 2 indicated that heat pain impaired concurrent 2-back task performance. However, no evidence was found that optimism protected against this pain-induced performance deterioration. Experimentally induced pain impairs concurrent, but not subsequent, working memory task performance. Manipulated optimism did not counteract pain-induced deterioration of 2-back performance. It is important to explore factors that may diminish the negative impact of pain on the ability to function in daily life, as pain itself often cannot be remediated. We are planning to conduct future studies that should shed further light on the conditions, contexts and executive operations for which optimism can act as a protective factor.
The Aeronautical Data Link: Decision Framework for Architecture Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Goode, Plesent W.
2003-01-01
A decision analytic approach that develops optimal data link architecture configuration and behavior to meet multiple conflicting objectives of concurrent and different airspace operations functions has previously been developed. The approach, premised on a formal taxonomic classification that correlates data link performance with operations requirements, information requirements, and implementing technologies, provides a coherent methodology for data link architectural analysis from top-down and bottom-up perspectives. This paper follows the previous research by providing more specific approaches for mapping and transitioning between the lower levels of the decision framework. The goal of the architectural analysis methodology is to assess the impact of specific architecture configurations and behaviors on the efficiency, capacity, and safety of operations. This necessarily involves understanding the various capabilities, system level performance issues and performance and interface concepts related to the conceptual purpose of the architecture and to the underlying data link technologies. Efficient and goal-directed data link architectural network configuration is conditioned on quantifying the risks and uncertainties associated with complex structural interface decisions. Deterministic and stochastic optimal design approaches will be discussed that maximize the effectiveness of architectural designs.
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-04-19
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.
2012-03-13
Enormous military and commercial interest exists in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates the design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. The design and development of a portable TE power system using a JP-8 combustor as a high-temperature heat source, with optimal process flows that depend on efficient heat generation, transfer, and recovery within the system, are explored. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE sub-system performance were coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermoelectric converter thermal/mechanical modeling. To this end, this work reports on design integration of system-level process flow simulations using the commercial software CHEMCAD with in-house thermoelectric converter and module optimization, and heat exchanger analyses using COMSOL software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria. This paper discusses this simulation process, which leads directly to system efficiency power maps defining potentially available optimal system operating conditions and regimes. This coupled simulation approach enables pathways for integrated use of high-performance combustor components, high-performance TE devices, and microtechnologies to produce a compact, lightweight, combustion-driven TE power system prototype that operates on common fuels.
Kumar, Manjeet; Rawat, Tarun Kumar; Aggarwal, Apoorva
2017-03-01
In this paper, a new meta-heuristic optimization technique, called the interior search algorithm (ISA) with Lévy flight, is proposed and applied to determine the optimal parameters of an unknown infinite impulse response (IIR) system for the system identification problem. ISA is based on aesthetics, which is commonly used in interior design and decoration processes. In ISA, a composition phase and a mirror phase are applied for addressing nonlinear and multimodal system identification problems. System identification using the modified-ISA (M-ISA) based method involves faster convergence and single-parameter tuning, and it does not require derivative information because it uses a stochastic random search based on the concepts of Lévy flight. Proper tuning of the control parameter has been performed in order to achieve a balance between the intensification and diversification phases. In order to evaluate the performance of the proposed method, mean square error (MSE), computation time and percentage improvement are considered as the performance measures. To validate the performance of the M-ISA based method, simulations have been carried out for three benchmark IIR systems using same-order and reduced-order models. Genetic algorithm (GA), particle swarm optimization (PSO), cat swarm optimization (CSO), cuckoo search algorithm (CSA), differential evolution using wavelet mutation (DEWM), firefly algorithm (FFA), craziness based particle swarm optimization (CRPSO), harmony search (HS) algorithm, opposition based harmony search (OHS) algorithm, hybrid particle swarm optimization-gravitational search algorithm (HPSO-GSA) and ISA are also used to model the same examples, and the simulation results are compared. The obtained results confirm the efficiency of the proposed method.
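The Lévy-flight ingredient is easy to reproduce: Mantegna's algorithm, sketched below, generates heavy-tailed steps that mix occasional long exploratory jumps with many short local moves. The filter-parameter vector and step scale are hypothetical; this is not the full M-ISA method.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(5)

# Mantegna's algorithm for Levy-flight steps with stability index beta.
def levy_step(size, beta=1.5):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)    # heavy-tailed step sizes

# Example: perturb a candidate IIR coefficient vector with a Levy move.
theta = np.array([0.5, -0.3, 0.1])        # hypothetical filter parameters
theta_new = theta + 0.01 * levy_step(theta.shape)
print("candidate move:", np.round(theta_new, 4))
```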
Robust optimization of a tandem grating solar thermal absorber
NASA Astrophysics Data System (ADS)
Choi, Jongin; Kim, Mingeon; Kang, Kyeonghwan; Lee, Ikjin; Lee, Bong Jae
2018-04-01
Ideal solar thermal absorbers need a high spectral absorptance across the broad solar spectrum to utilize solar radiation effectively. The majority of recent studies of solar thermal absorbers focus on achieving nearly perfect absorption using nanostructures whose characteristic dimension is smaller than the wavelength of sunlight. However, precise fabrication of such nanostructures is not easy in practice; unavoidable errors always occur to some extent in the dimensions of fabricated nanostructures, causing an undesirable deviation in absorption performance between the designed structure and the actually fabricated one. In order to minimize the variation in the solar absorptance due to fabrication error, robust optimization can be performed during the design process. However, optimization of a solar thermal absorber considering all design variables often requires tremendous computational cost to find an optimal combination of design variables offering robustness as well as high performance. To achieve this goal, we apply robust optimization using the Kriging method and a genetic algorithm to the design of a tandem grating solar absorber. By constructing a surrogate model through the Kriging method, the computational cost can be substantially reduced because exact calculation of the performance for every combination of variables is not necessary. Using the surrogate model and the genetic algorithm, we successfully design an effective solar thermal absorber exhibiting a low level of performance degradation under fabrication uncertainty in the design variables.
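A compact sketch of the surrogate-based robustness idea follows, using scikit-learn's Gaussian-process regressor as the Kriging model; the one-dimensional "absorptance" function, sample count, and tolerance band are invented for illustration, whereas a real study would query an electromagnetic solver and search with a genetic algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)

# Synthetic "solar absorptance" vs. one geometric design variable; a real
# study would call an electromagnetic solver here instead.
def merit(x):
    return np.exp(-8.0 * (x - 0.55) ** 2) * (1.0 + 0.2 * np.sin(12 * x))

X = rng.uniform(0, 1, size=(15, 1))               # sampled designs
y = merit(X.ravel())

# Kriging (Gaussian-process) surrogate fitted to the expensive samples.
gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=0.2),
    normalize_y=True).fit(X, y)

grid = np.linspace(0, 1, 501)
tol = 0.03                                        # assumed fabrication tolerance
# Robust score: mean surrogate prediction over x +/- tol (cheap on the GP).
robust = [gp.predict(np.clip(g + np.linspace(-tol, tol, 11), 0, 1)
                     .reshape(-1, 1)).mean() for g in grid]
nominal_best = grid[gp.predict(grid.reshape(-1, 1)).argmax()]
print(f"nominal best x = {nominal_best:.3f}, "
      f"robust best x = {grid[np.argmax(robust)]:.3f}")
```

The robust pick tends to sit on a broad plateau of the surrogate rather than on a sharp peak, which is exactly the trade the abstract describes.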
Application of Semi Active Control Techniques to the Damping Suppression Problem of Solar Sail Booms
NASA Technical Reports Server (NTRS)
Adetona, O.; Keel, L. H.; Whorton, M. S.
2007-01-01
Solar sails provide a propellant-free form of space propulsion. These are large flat surfaces that generate thrust when they are impacted by light. When attached to a space vehicle, the thrust generated can propel the vehicle to great distances at significant speeds. For optimal performance the sail must be kept from excessive vibration. Active control techniques can provide the best performance, but they require an external power source that may add significant parasitic mass to the solar sail, whereas solar sails require low mass for optimal performance. Secondly, active control techniques typically require a good system model to ensure stability and performance, and the accuracy of solar sail models validated on Earth for a space environment is questionable. An alternative is passive vibration techniques, which do not require an external power supply and do not destabilize the system. A third alternative is referred to as semi-active control. This approach tries to get the best of both active and passive control while avoiding their pitfalls. In semi-active control, an active control law is designed for the system, and passive control techniques are used to implement it. As a result, no external power supply is needed and the system cannot be destabilized. Though it typically underperforms active control techniques, semi-active control has been shown to outperform passive control approaches and can be unobtrusively installed on a solar sail boom. Motivated by this, the objective of this research is to study the suitability of a piezoelectric (PZT) patch actuator/sensor based semi-active control system for the vibration suppression problem of solar sail booms. Accordingly, we develop a suitable mathematical and computer model for such studies and demonstrate the capabilities of the proposed approach with computer simulations.
Comparison of DNQ/novolac resists for e-beam exposure
NASA Astrophysics Data System (ADS)
Fedynyshyn, Theodore H.; Doran, Scott P.; Lind, Michele L.; Lyszczarz, Theodore M.; DiNatale, William F.; Lennon, Donna; Sauer, Charles A.; Meute, Jeff
1999-12-01
We have surveyed the commercial resist market with the dual purpose of identifying diazoquinone (DNQ)/novolac based resists that have potential for use as e-beam mask making resists and baselining these resists for comparison against future mask making resist candidates. For completeness, this survey would require that each resist be compared with an optimized developer and development process. To accomplish this task in an acceptable time period, e-beam lithography modeling was employed to quickly identify the resist and developer combinations that lead to superior resist performance. We describe the verification of a method to quickly screen commercial i-line resists with different developers by determining modeling parameters for i-line resists from e-beam exposures, modeling the resist performance, and comparing predicted performance against actual performance. We determined the lithographic performance of several DNQ/novolac resists whose modeled performance suggests that sensitivities of less than 40 μC/cm2, coupled with less than 10-nm CD change per percent change in dose, are possible for target 600-nm features. This was accomplished by performing a series of statistically designed experiments on the leading resist candidates to optimize processing variables, followed by comparing experimentally determined sensitivities, latitudes, and profiles of the DNQ/novolac resists at their optimized processes.
Cognitive radio adaptation for power consumption minimization using biogeography-based optimization
NASA Astrophysics Data System (ADS)
Qi, Pei-Han; Zheng, Shi-Lian; Yang, Xiao-Niu; Zhao, Zhi-Jin
2016-12-01
Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. Project supported by the National Natural Science Foundation of China (Grant No. 61501356), the Fundamental Research Funds of the Ministry of Education, China (Grant No. JB160101), and the Postdoctoral Fund of Shaanxi Province, China.
Taguchi experimental design to determine the taste quality characteristic of candied carrot
NASA Astrophysics Data System (ADS)
Ekawati, Y.; Hapsari, A. A.
2018-03-01
Robust parameter design is used to design products that are robust to noise factors, so that product performance stays on target and delivers better quality. In the process of designing and developing the innovative product of candied carrot, robust parameter design is carried out using the Taguchi Method. The method is used to determine an optimal quality design, based on the process and the composition of product ingredients that are in accordance with consumer needs and requirements. According to the identification of consumer needs from the previous research, the quality dimensions that need to be assessed are the taste and texture of the product; the quality dimension assessed in this research is limited to taste. Organoleptic testing is used for this assessment, specifically hedonic testing, in which assessments are based on consumer preferences. The data processing uses mean and signal-to-noise ratio calculations and optimal level setting to determine the optimal process/composition of product ingredients. The optimal settings are analyzed using confirmation experiments to prove that the proposed product matches consumer needs and requirements. The result of this research is the identification of the factors that affect the product taste and the optimal quality of the product according to the Taguchi Method.
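The signal-to-noise calculation at the core of this kind of analysis can be sketched as follows; the factor names, levels, and hedonic scores are invented for illustration, and the larger-is-better S/N form is used because higher taste scores are preferred.

```python
import numpy as np

# Illustrative hedonic taste scores (1-7 scale) for an L4 orthogonal array
# with three two-level factors; values and factor meanings are made up.
scores = {                      # (sugar, blanching, drying) -> replicate scores
    (1, 1, 1): [4.2, 4.5, 4.1],
    (1, 2, 2): [5.0, 5.3, 4.8],
    (2, 1, 2): [5.8, 6.1, 5.7],
    (2, 2, 1): [5.2, 5.0, 5.4],
}

def sn_larger_is_better(y):
    """Taguchi larger-is-better S/N ratio: -10 log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Average S/N per factor level; the optimal level maximizes the S/N ratio.
for factor in range(3):
    for level in (1, 2):
        sn = [sn_larger_is_better(v) for k, v in scores.items() if k[factor] == level]
        print(f"factor {factor}, level {level}: mean S/N = {np.mean(sn):.2f} dB")
```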
A method of network topology optimization design considering application process characteristic
NASA Astrophysics Data System (ADS)
Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo
2018-03-01
Communication networks are designed to meet the usage requirements of users for various network applications. Previous studies of network topology optimization design mainly considered network traffic, which is the result of network application operation, rather than a design element of communication networks. A network application is a procedure for the usage of services by users with demanded performance requirements, and it has an obvious process characteristic. In this paper, we propose a method to optimize the design of communication network topology that takes the application process characteristic into account. Taking minimum network delay as the objective, and the cost of network design and network connective reliability as constraints, an optimization model of network topology design is formulated, and the optimal network topology is searched by a Genetic Algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under multiple process-oriented applications, which can guide the generation of the initial population and thereby improve the efficiency of the GA. Numerical simulations show the effectiveness and validity of our proposed method. Network topology optimization design that considers applications can improve the reliability of applications and provide guidance for network builders in the early stage of network design, which is of great significance in engineering practice.
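A compact sketch of such a GA formulation follows: candidate topologies are bit strings over candidate links, average shortest path length stands in for network delay, and connectivity and link-budget terms encode the reliability and cost constraints. The networkx library, the budget, and the GA settings are assumptions for illustration.

```python
import itertools, random
import networkx as nx

nodes = range(6)
edges = list(itertools.combinations(nodes, 2))    # candidate links
cost_budget = 9                                   # illustrative max link count
rng = random.Random(1)

def fitness(bits):
    g = nx.Graph()
    g.add_nodes_from(nodes)
    g.add_edges_from(e for e, b in zip(edges, bits) if b)
    if not nx.is_connected(g):                    # reliability constraint
        return 1e6
    delay = nx.average_shortest_path_length(g)    # proxy for network delay
    penalty = 100.0 * max(0, sum(bits) - cost_budget)   # cost constraint
    return delay + penalty

pop = [[rng.randint(0, 1) for _ in edges] for _ in range(30)]
for _ in range(80):
    pop.sort(key=fitness)
    elite = pop[:10]
    children = []
    while len(children) < 20:
        a, b = rng.sample(elite, 2)
        cut = rng.randrange(len(edges))
        child = a[:cut] + b[cut:]                 # one-point crossover
        if rng.random() < 0.1:                    # bit-flip mutation
            i = rng.randrange(len(edges))
            child[i] = 1 - child[i]
        children.append(child)
    pop = elite + children
best = min(pop, key=fitness)
```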
NASA Technical Reports Server (NTRS)
Calhoun, Phillip C.; Hampton, R. David; Whorton, Mark S.
2001-01-01
The acceleration environment on the International Space Station (ISS) will likely exceed the requirements of many micro-gravity experiments. The Glovebox Integrated Microgravity Isolation Technology (g-LIMIT) is being built by the NASA Marshall Space Flight Center to attenuate the nominal acceleration environment and provide some isolation for micro-gravity science experiments. G-LIMIT uses Lorentz (voice-coil) magnetic actuators to isolate a platform for mounting science payloads from the nominal acceleration environment. The system utilizes payload acceleration, relative position, and relative orientation measurements in a feedback controller to accomplish the vibration isolation task. The controller provides current commands to six magnetic actuators, producing the required experiment isolation from the ISS acceleration environment. This paper presents the development of a candidate control law to meet the acceleration attenuation requirements for the g-LIMIT experiment platform. The controller design is developed using linear optimal control techniques for both frequency-weighted H(sub 2) and H(sub infinity) norms. A comparison of the performance and robustness to plant uncertainty for these two optimal control design approaches is included in the discussion.
Aerodynamic shape optimization using preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Burgreen, Greg W.; Baysal, Oktay
1993-01-01
In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational effort required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA 0012) airfoil in inviscid transonic flow at zero degrees angle of attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.
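The preconditioned conjugate gradient kernel at the heart of such a procedure can be sketched with a symmetric positive-definite model system and a Jacobi preconditioner; the tridiagonal matrix below merely stands in for the linearized design equations and is an illustrative assumption.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Illustrative SPD system standing in for the linearized design equations.
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner: M approximates A^{-1} cheaply.
d_inv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: d_inv * v)

x_pcg, info = cg(A, b, M=M)       # preconditioned conjugate gradient
assert info == 0                   # 0 indicates convergence
```

Even a diagonal preconditioner typically cuts the iteration count substantially; stronger preconditioners trade setup cost for further reductions, which is the effect exploited above.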
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
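The sketch below shows plain Monte Carlo failure-probability estimation over Sobol' samples, the kind of supporting-point evaluation the posterior approximation relies on; it is not an implementation of GSS itself, and the limit-state function is a toy.

```python
import numpy as np
from scipy.stats import qmc, norm

def g(x):
    """Illustrative limit-state function; failure is defined by g(x) <= 0."""
    return 3.0 - x[:, 0] - x[:, 1] ** 2

# Low-discrepancy Sobol' samples mapped to standard normal variables,
# in the spirit of the experimental design used for the supporting points.
sampler = qmc.Sobol(d=2, scramble=True, seed=7)
u = np.clip(sampler.random_base2(m=14), 1e-12, 1.0 - 1e-12)  # 2^14 points
x = norm.ppf(u)                           # transform to N(0, 1) space
pf = np.mean(g(x) <= 0.0)                 # crude failure-probability estimate
print(f"estimated failure probability: {pf:.4f}")
```

GSS improves on this crude estimator for rare events by sampling nested intermediate failure domains, which is why it can assess several small failure probabilities in a single run.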
NASA Astrophysics Data System (ADS)
Cameron, Christopher J.; Lind Nordgren, Eleonora; Wennhage, Per; Göransson, Peter
2014-06-01
Balancing structural and acoustic performance of a multi-layered sandwich panel is a formidable undertaking. Frequently the gains achieved in terms of reduced weight, still meeting the structural design requirements, are lost by the changes necessary to regain acceptable acoustic performance. To alleviate this, a design method for a multifunctional load bearing vehicle body panel is proposed which attempts to achieve a balance between structural and acoustic performance. The approach is based on numerical modelling of the structural and acoustic behaviour in a combined topology, size, and property optimization in order to achieve a three dimensional optimal distribution of structural and acoustic foam materials within the bounding surfaces of a sandwich panel. In particular the effects of the coupling between one of the bounding surface face sheets and acoustic foam are examined for its impact on both the structural and acoustic overall performance of the panel. The results suggest a potential in introducing an air gap between the acoustic foam parts and one of the face sheets, provided that the structural design constraints are met without prejudicing the layout of the different foam types.
System Analysis and Performance Benefits of an Optimized Rotorcraft Propulsion System
NASA Technical Reports Server (NTRS)
Bruckner, Robert J.
2007-01-01
The propulsion system of rotorcraft vehicles is the most critical system to the vehicle in terms of safety and performance. The propulsion system must provide both vertical lift and forward flight propulsion during the entire mission. Whereas propulsion is a critical element for all flight vehicles, it is particularly critical for rotorcraft due to their limited safe, unpowered landing capability. This unparalleled reliability requirement has led rotorcraft power plants down an evolutionary path in which the system looks and performs quite similarly to those of the 1960s. By and large, the advancements in rotorcraft propulsion have come in terms of safety and reliability and not in terms of performance. The concept of the optimized propulsion system is a means by which both reliability and performance can be improved for rotorcraft vehicles. The optimized rotorcraft propulsion system, which couples an oil-free turboshaft engine to a highly loaded gearbox that provides axial load support for the power turbine, can be designed with current laboratory-proven technology. Such a system can provide up to a 60% weight reduction in the propulsion system of rotorcraft vehicles. Several technical challenges are apparent at the conceptual design level and should be addressed with current research.
Bearing optimization for SSME HPOTP application
NASA Technical Reports Server (NTRS)
Armstrong, Elizabeth S.; Coe, Harold H.
1988-01-01
The space shuttle main engine (SSME) high-pressure oxygen turbopumps (HPOTP) have not experienced the service life required of them. To improve the life of the existing turbopump bearings, modifications that could be retrofitted into the present bearing cavity are being investigated. Several bearing parameters were optimized using the computer program SHABERTH, which performs a thermomechanical simulation of a load support system. The computer analysis showed that improved bearing performance is feasible if low friction coefficients can be attained. Bearing geometries were optimized considering heat generation, equilibrium temperatures, and relative life. Two sets of curvatures were selected from the optimization: an inner-raceway curvature of 0.54 with an outer-raceway curvature of 0.52, and an inner-raceway curvature of 0.55 with an outer-raceway curvature of 0.53. A contact angle of 16 deg was also selected. Thermal gradients through the bearings were found to be lower with liquid lubrication than with solid film lubrication. As the coolant flowrate through the bearing increased, the ball temperature decreased, but at a continuously decreasing rate. The optimum flowrate was approximately 4 kg/s. The analytical modeling used to determine these feasible modifications to improve bearing performance is described.
Optimal digital dynamical decoupling for general decoherence via Walsh modulation
NASA Astrophysics Data System (ADS)
Qi, Haoyu; Dowling, Jonathan P.; Viola, Lorenza
2017-11-01
We provide a general framework for constructing digital dynamical decoupling sequences based on Walsh modulation—applicable to arbitrary qubit decoherence scenarios. By establishing equivalence between decoupling design based on Walsh functions and on concatenated projections, we identify a family of optimal Walsh sequences, which can be exponentially more efficient, in terms of the required total pulse number, for fixed cancellation order, than known digital sequences based on concatenated design. Optimal sequences for a given cancellation order are highly non-unique—their performance depending sensitively on the control path. We provide an analytic upper bound to the achievable decoupling error and show how sequences within the optimal Walsh family can substantially outperform concatenated decoupling in principle, while respecting realistic timing constraints.
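For readers unfamiliar with the construction, a Paley-ordered Walsh function can be built as a product of Rademacher square waves selected by the binary digits of its index, with its sign flips defining the pulse times of a digital sequence; the sampling grid below is an illustrative choice.

```python
import numpy as np

def walsh(k, n_points=256):
    """Paley-ordered Walsh function w_k sampled on [0, 1).

    w_k is the product of the Rademacher functions r_j selected by the
    binary digits of k; its values are +/-1, and the sign-flip times define
    the pulse pattern of a digital decoupling sequence.
    """
    t = np.arange(n_points) / n_points
    w = np.ones(n_points)
    j = 0
    while k >> j:
        if (k >> j) & 1:
            # Rademacher r_j: a square wave with 2^(j+1) half-periods on [0, 1).
            w *= np.where((np.floor(t * 2 ** (j + 1)) % 2) == 0, 1.0, -1.0)
        j += 1
    return w

# Pulse times of a Walsh decoupling sequence are the sign flips of w_k.
w = walsh(5)
flips = np.nonzero(np.diff(w))[0]
```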
Techniques for designing rotorcraft control systems
NASA Technical Reports Server (NTRS)
Yudilevitch, Gil; Levine, William S.
1994-01-01
Over the last two and a half years we have been demonstrating a new methodology for the design of rotorcraft flight control systems (FCS) to meet handling qualities requirements. This method is based on multicriterion optimization as implemented in the optimization package CONSOL-OPTCAD (C-O). This package has been developed at the Institute for Systems Research (ISR) at the University of Maryland at College Park. This design methodology has been applied to the design of a FCS for the UH-60A helicopter in hover having the ADOCS control structure. The controller parameters have been optimized to meet the ADS-33C specifications. Furthermore, using this approach, an optimal (minimum control energy) controller has been obtained and trade-off studies have been performed.
Integrated solar energy system optimization
NASA Astrophysics Data System (ADS)
Young, S. K.
1982-11-01
The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.
Case study: Optimizing fault model input parameters using bio-inspired algorithms
NASA Astrophysics Data System (ADS)
Plucar, Jan; Grunt, Ondřej; Zelinka, Ivan
2017-07-01
We present a case study that demonstrates a bio-inspired approach to finding optimal parameters for a GSM fault model. This model is constructed using a Petri Net approach and represents a dynamic model of the GSM network environment in the suburban areas of Ostrava city (Czech Republic). We were faced with the task of finding optimal parameters for an application that requires a high volume of data transfers between the application itself and secure servers located in a datacenter. In order to find the optimal set of parameters we employ bio-inspired algorithms such as Differential Evolution (DE) and the Self Organizing Migrating Algorithm (SOMA). In this paper we present the use of these algorithms, compare their results, and judge their performance in fault probability mitigation.
Optimization of seismic isolation systems via harmony search
NASA Astrophysics Data System (ADS)
Melih Nigdeli, Sinan; Bekdaş, Gebrail; Alhan, Cenk
2014-11-01
In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for the neighbouring parameters of optimum values and the results proved that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems. Isolation systems composed of isolation elements that are inherently nonlinear are the subject of a future study. Investigation of the optimum isolation system parameters has been considered in parametric studies. However, obtaining the best performance of a seismic isolation system requires a true optimization by taking the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.
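The harmony search loop itself is compact; the sketch below tunes an isolation period and damping ratio against a toy response surrogate with a peak-displacement penalty, whereas the study evaluates full time-history analyses. All objective expressions, bounds, and HS settings here are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(3)

def cost(x):
    """Toy objective: acceleration surrogate plus a displacement-limit penalty.
    The real study scores time-history responses to earthquake records."""
    period, damping = x
    accel = (1.0 / period) ** 2 + 5.0 * damping     # toy acceleration surrogate
    disp = 0.4 * period / (1.0 + 10.0 * damping)    # toy peak displacement [m]
    return accel + 1e3 * max(0.0, disp - 0.30)      # 30 cm displacement limit

lo, hi = np.array([2.0, 0.05]), np.array([6.0, 0.30])  # period [s], damping ratio
hms, hmcr, par, bw = 20, 0.9, 0.3, 0.05
memory = lo + rng.random((hms, 2)) * (hi - lo)          # harmony memory
for _ in range(2000):
    new = np.empty(2)
    for j in range(2):
        if rng.random() < hmcr:                         # memory consideration
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                      # pitch adjustment
                new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
        else:                                           # random selection
            new[j] = lo[j] + rng.random() * (hi[j] - lo[j])
    new = np.clip(new, lo, hi)
    worst = np.argmax([cost(h) for h in memory])
    if cost(new) < cost(memory[worst]):
        memory[worst] = new                             # replace worst harmony
best = min(memory, key=cost)
```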
Dynamical modeling and multi-experiment fitting with PottersWheel
Maiwald, Thomas; Timmer, Jens
2008-01-01
Motivation: Modelers in Systems Biology need a flexible framework that allows them to easily create new dynamic models, investigate their properties and fit several experimental datasets simultaneously. Multi-experiment fitting is a powerful approach to estimate parameter values, to check the validity of a given model, and to discriminate competing model hypotheses. It requires high-performance integration of ordinary differential equations and robust optimization. Results: We here present the comprehensive modeling framework PottersWheel (PW) including novel functionalities to satisfy these requirements, with strong emphasis on the inverse problem, i.e. data-based modeling of partially observed and noisy systems like signal transduction pathways and metabolic networks. PW is designed as a MATLAB toolbox and includes numerous user interfaces. Deterministic and stochastic optimization routines are combined by fitting in logarithmic parameter space, allowing for robust parameter calibration. Model investigation includes statistical tests for model-data compliance, model discrimination, identifiability analysis and calculation of Hessian- and Monte-Carlo-based parameter confidence limits. A rich application programming interface is available for customization within the user's own MATLAB code. Within an extensive performance analysis, we identified and significantly improved an integrator-optimizer pair which decreases the fitting duration for a realistic benchmark model by a factor of over 3000 compared to MATLAB with the optimization toolbox. Availability: PottersWheel is freely available for academic usage at http://www.PottersWheel.de/. The website contains a detailed documentation and introductory videos. The program has been intensively used since 2005 on Windows, Linux and Macintosh computers and does not require special MATLAB toolboxes. Contact: maiwald@fdm.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18614583
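Fitting in logarithmic parameter space, as PW does, can be sketched with standard SciPy tools: exponentiating the search variables enforces positivity and evens out parameter scales. The two-state pathway below is a toy stand-in, and the optimizer differs from PW's combined deterministic/stochastic routines.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k1, k2):
    """Toy two-state pathway A -> B -> 0, standing in for a signalling model."""
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]

t_obs = np.linspace(0.0, 10.0, 20)
k_true = (0.7, 0.2)
y_obs = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_obs, args=k_true).y
y_obs = y_obs + 0.01 * np.random.default_rng(0).normal(size=y_obs.shape)

def residuals(log_k):
    k = np.exp(log_k)       # search in log space: positivity and even scaling
    sol = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_obs, args=tuple(k))
    return (sol.y - y_obs).ravel()

fit = least_squares(residuals, x0=np.log([0.1, 0.1]))
k_hat = np.exp(fit.x)       # estimated rate constants
```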
Efficient droplet router for digital microfluidic biochip using particle swarm optimizer
NASA Astrophysics Data System (ADS)
Pan, Indrajit; Samanta, Tuhina
2013-01-01
The digital microfluidic biochip has emerged as a revolutionary development in micro-electromechanical research. Complex bioassays and pathological analyses are efficiently performed on this miniaturized chip with negligible amounts of sample specimens. Biochips were initially based on a continuous-fluid-flow mechanism but later evolved to the more efficient concept of digital fluid flow; these second-generation biochips are capable of serving more complex bioassays. This operational change in biochip technology created a need for high-end computer-aided design tools for physical design automation, and it paved new avenues of research to assist proficient design automation. Droplet routing is one of the major aspects, since it requires minimization of both routing completion time and total electrode usage, a task that involves optimization of multiple associated parameters. In this paper we propose a particle swarm optimization based approach for droplet routing. The process operates in two phases: initially we perform clustering of the state space and classification of nets into designated clusters, which reduces the solution space by redefining local suboptimal targets in the interleaved space between the source and global target of a net; in the next phase we resolve the concurrent routing issues of every suboptimal situation to generate the final routing schedule. The method was applied to some standard test benches and hard test sets. Comparative analysis of experimental results shows good improvement in unit cell usage, routing completion time, and execution time over well-established existing methods.
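The PSO kernel underlying such a router can be sketched as below; the routing objective is a placeholder for decoding a particle into a droplet schedule and scoring completion time and electrode usage, and the coefficients are common textbook values rather than the paper's settings.

```python
import numpy as np
rng = np.random.default_rng(5)

def route_cost(x):
    """Placeholder routing objective: a decoded schedule's completion time
    plus electrode usage would be evaluated here."""
    return np.sum((x - 0.3) ** 2)

n, d, iters = 24, 8, 200
w, c1, c2 = 0.72, 1.49, 1.49                  # common PSO coefficients
pos = rng.random((n, d))
vel = np.zeros((n, d))
pbest = pos.copy()
pbest_f = np.array([route_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)        # keep particles in the unit box
    f = np.array([route_cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()
```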
A hybrid optimization approach in non-isothermal glass molding
NASA Astrophysics Data System (ADS)
Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz
2016-10-01
Intensively growing demands for complex yet low-cost precision glass optics from today's photonic market motivate the development of an efficient and economically viable manufacturing technology for complex-shaped optics. Against the state-of-the-art replication-based methods, Non-isothermal Glass Molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, lower energy consumption and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precision requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the outset, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach in Non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. A hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) is first applied to find the optimal process parameters and ensure the stability of the process. The second stage continues with the optimization of the glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for process development in Non-isothermal glass molding.
Shan, Haijun; Xu, Haojie; Zhu, Shanan; He, Bin
2015-10-21
For sensorimotor-rhythm-based brain-computer interface (BCI) systems, classification of different motor imageries (MIs) remains a crucial problem. An important aspect is how many scalp electrodes (channels) should be used to reach optimal performance in classifying motor imaginations. While previous research on channel selection mainly focused on MI task paradigms without feedback, the present work investigates optimal channel selection in MI task paradigms with real-time feedback (two-class control and four-class control paradigms). In the present study, three datasets, recorded respectively from an MI tasks experiment and from two-class control and four-class control experiments, were analyzed offline. Multiple frequency-spatial synthesized features were comprehensively extracted from every channel, and a new enhanced method, IterRelCen, was proposed to perform channel selection. IterRelCen is based on the Relief algorithm but is enhanced in two aspects: a changed target sample selection strategy and the adoption of iterative computation, making it more robust in feature selection. Finally, a multiclass support vector machine was applied as the classifier. The smallest number of channels that yielded the best classification accuracy was considered the optimal channel set. One-way ANOVA was employed to test the significance of performance improvement among using the optimal channels, all channels, and three typical MI channels (C3, C4, Cz). The results show that the proposed method outperformed other channel selection methods, achieving average classification accuracies of 85.2, 94.1, and 83.2% for the three datasets, respectively. Moreover, the channel selection results reveal that the average numbers of optimal channels were significantly different among the three MI paradigms. It is demonstrated that IterRelCen has a strong ability for feature selection. In addition, the results show that the numbers of optimal channels in the three motor imagery BCI paradigms are distinct: from an MI task paradigm, to a two-class control paradigm, to a four-class control paradigm, the number of channels required to optimize classification accuracy increased. These findings may provide useful information for optimizing EEG-based BCI systems and further improving the performance of noninvasive BCIs.
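IterRelCen itself is not specified in detail here, but the Relief core it builds on can be sketched as follows: channel-feature weights grow when a feature separates each sample from its nearest miss more than from its nearest hit. The distance metric, iteration count, and synthetic data are illustrative assumptions.

```python
import numpy as np

def relief_weights(X, y, n_iter=200, seed=0):
    """Classic Relief feature weighting (the core that IterRelCen extends).

    X: (trials, features) matrix of per-channel features; y: class labels.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)    # Manhattan distances
        dists[i] = np.inf                       # exclude the sample itself
        same, diff = y == y[i], y != y[i]
        hit = np.where(same)[0][np.argmin(dists[same])]    # nearest hit
        miss = np.where(diff)[0][np.argmin(dists[diff])]   # nearest miss
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 12))
y = (X[:, 3] > 0).astype(int)          # feature 3 carries the class signal
ranking = np.argsort(relief_weights(X, y))[::-1]   # candidate channel order
```

Channels would then be added in ranked order until classification accuracy saturates, yielding the "smallest number of channels with best accuracy" criterion used above.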
NASA Technical Reports Server (NTRS)
Baldwin, Richard S.
2009-01-01
As NASA embarks on a renewed human presence in space, safe, human-rated, electrical energy storage and power generation technologies, which will be capable of demonstrating reliable performance in a variety of unique mission environments, will be required. To address the future performance and safety requirements for the energy storage technologies that will enhance and enable future NASA Constellation Program elements and other future aerospace missions, advanced rechargeable, lithium-ion battery technology development is being pursued with an emphasis on addressing performance technology gaps between state-of-the-art capabilities and critical future mission requirements. The material attributes and related performance of a lithium-ion cell's internal separator component are critical for achieving overall optimal performance, safety and reliability. This review provides an overview of the general types, material properties and the performance and safety characteristics of current separator materials employed in lithium-ion batteries, such as those materials that are being assessed and developed for future aerospace missions.
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2009-01-01
Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization (MDAO) tool. This tool will leverage existing tools and practices and allow the easy integration and adoption of new state-of-the-art software. The design of modern aircraft in the transonic regime is a challenging task, largely due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics (CFD) code. Design approaches in this speed regime are mainly based on manual trial and error. Because of the time required for unsteady CFD computations in the time domain, the whole design process is considerably slowed, and these analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and unsteady aerodynamic approximation. The method requires the unsteady transonic aerodynamics to be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient (AIC) matrices in the transonic speed regime. Unsteady CFD computations are needed only for the important columns of an AIC matrix, which correspond to the primary modes for flutter. Order reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem, and transonic flutter can then be found by classic methods such as rational function approximation, p-k, p, and root-locus. Such a methodology could be incorporated into an MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2, actually designed, built, and tested at NASA Dryden Flight Research Center. The results from the full-order model and the approximate reduced-order model are analyzed and compared.
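Of the order-reduction techniques mentioned, Guyan reduction is the simplest to sketch: slave degrees of freedom are condensed statically through the stiffness partition, and the same transformation reduces the mass matrix. The 4-DOF chain below is purely illustrative.

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Static (Guyan) condensation of stiffness K and mass M to master DOFs.

    Builds T = [I; -Kss^{-1} Ksm] over the master/slave partition, so that
    Kr = T' K T and Mr = T' M T.
    """
    n = K.shape[0]
    slave = np.setdiff1d(np.arange(n), master)
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    T = np.zeros((n, len(master)))
    T[master, np.arange(len(master))] = 1.0          # identity on masters
    T[np.ix_(slave, np.arange(len(master)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T

# Illustrative 4-DOF spring-mass chain reduced to DOFs 0 and 3.
K = (np.diag([2.0, 2.0, 2.0, 1.0])
     - np.diag([1.0, 1.0, 1.0], 1) - np.diag([1.0, 1.0, 1.0], -1))
M = np.eye(4)
Kr, Mr = guyan_reduce(K, M, master=np.array([0, 3]))
```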
Optimization of Smart Structure for Improving Servo Performance of Hard Disk Drive
NASA Astrophysics Data System (ADS)
Kajiwara, Itsuro; Takahashi, Masafumi; Arisaka, Toshihiro
Head positioning accuracy of the hard disk drive should be improved to meet today's increasing performance demands. Vibration suppression of the arm in the hard disk drive is very important for enhancing the servo bandwidth of the head positioning system. In this study, smart structure technology is introduced into the hard disk drive to suppress the vibration of the head actuator. It is expected that smart structure technology will contribute to the development of small, lightweight mechatronic devices with the required performance. First, modeling of the system is conducted with the finite element method and modal analysis. Next, the actuator location and the control system are simultaneously optimized using a genetic algorithm. The vibration control effect of the proposed mechanisms has been evaluated in simulations.
Mach 6.5 air induction system design for the Beta 2 two-stage-to-orbit booster vehicle
NASA Technical Reports Server (NTRS)
Midea, Anthony C.
1991-01-01
A preliminary, two-dimensional, mixed compression air induction system is designed for the Beta II Two Stage to Orbit booster vehicle to minimize installation losses and efficiently deliver the required airflow. Design concepts, such as an external isentropic compression ramp and a bypass system were developed and evaluated for performance benefits. The design was optimized by maximizing installed propulsion/vehicle system performance. The resulting system design operating characteristics and performance are presented. The air induction system design has significantly lower transonic drag than similar designs and only requires about 1/3 of the bleed extraction. In addition, the design efficiently provides the integrated system required airflow, while maintaining adequate levels of total pressure recovery. The excellent performance of this highly integrated air induction system is essential for the successful completion of the Beta II booster vehicle mission.
NASA Technical Reports Server (NTRS)
Savage, M.; Mackulin, M. J.; Coe, H. H.; Coy, J. J.
1991-01-01
Optimization procedures allow one to design a spur gear reduction for maximum life and other end use criteria. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial guess values. The optimization algorithm is described, and the models for gear life and performance are presented. The algorithm is compact and has been programmed for execution on a desk top computer. Two examples are presented to illustrate the method and its application.
Reducing maintenance costs in agreement with CNC machine tools reliability
NASA Astrophysics Data System (ADS)
Ungureanu, A. L.; Stan, G.; Butunoi, P. A.
2016-08-01
Aligning maintenance strategy with reliability is a challenge due to the need to find an optimal balance between them. Because the various methods described in the relevant literature involve laborious calculations or use of software that can be costly, this paper proposes a method that is easier to implement on CNC machine tools. The new method, called the Consequence of Failure Analysis (CFA) is based on technical and economic optimization, aimed at obtaining a level of required performance with minimum investment and maintenance costs.
NASA Astrophysics Data System (ADS)
Kiran, B. S.; Singh, Satyendra; Negi, Kuldeep
The GSAT-12 spacecraft provides communication services from the INSAT/GSAT system in the Indian region. The spacecraft carries 12 extended C-band transponders. GSAT-12 was launched by ISRO's PSLV from Sriharikota into a sub-geosynchronous transfer orbit (sub-GTO) of 284 x 21000 km with an inclination of 18 deg. This mission successfully accomplished combined optimization of launch vehicle and satellite capabilities to maximize the operational life of the spacecraft. This paper describes the mission analysis carried out for GSAT-12, comprising the launch window, orbital events study and orbit-raising maneuver strategies under various mission operational constraints. GSAT-12 is equipped with two earth sensors (ES), three gyroscopes and a digital sun sensor. The launch window was generated considering the mission requirement of a minimum of 45 minutes of ES data for calibration of the gyros with a Roll-sun-pointing orientation in the transfer orbit. Since the transfer orbit period was a rather short 6.1 hr, the required pitch biases were worked out to meet the gyro-calibration requirement. A 440 N Liquid Apogee Motor (LAM) is used for orbit raising. The objective of the maneuver strategy is to achieve the desired drift orbit while satisfying mission constraints and minimizing propellant expenditure. For a sub-GTO, the optimal strategy is to first perform an in-plane maneuver at perigee to raise the apogee to the synchronous level and then perform combined maneuvers at the synchronous apogee to achieve the desired drift orbit. The perigee burn opportunities were examined considering the ground station visibility requirement for monitoring the burn. Two maneuver strategies were proposed: an optimal five-burn strategy with two perigee burns centered around perigee#5 and perigee#8 with partial ground station visibility and three apogee burns with dual station visibility; and a near-optimal five-burn strategy with two off-perigee burns at perigee#5 and perigee#8 with single ground station visibility and three apogee burns with dual station visibility. The range vector profiles were studied in the spacecraft frame during LAM burn phases, and accurate polarization predictions were provided to supporting ground stations. The near-optimal strategy was selected for implementation in order to ensure full visibility during each LAM burn. Contingency maneuver plans were generated in preparation for specified propulsion-system-related contingencies. Maneuver plans were generated considering 3-sigma dispersions in the transfer orbit. GSAT-12 is positioned at 83 deg East longitude. The estimated operational life is about 11 years, which was realized through the operationally optimal maneuver strategy selected from the detailed mission analysis.
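The in-plane perigee burn that raises apogee to the synchronous level can be sized with the vis-viva equation; the sketch below uses the 284 x 21000 km orbit quoted above and ignores the inclination change and finite-burn losses handled in the actual maneuver design.

```python
import math

MU = 398600.4418          # Earth's gravitational parameter [km^3/s^2]
RE = 6378.137             # Earth's equatorial radius [km]

# Sub-GTO from the mission: 284 x 21000 km; raise apogee to 35786 km altitude.
r_p = RE + 284.0
r_a1 = RE + 21000.0
r_a2 = RE + 35786.0

def vis_viva(r, a):
    """Orbital speed at radius r for semi-major axis a."""
    return math.sqrt(MU * (2.0 / r - 1.0 / a))

a1 = 0.5 * (r_p + r_a1)   # initial transfer-orbit semi-major axis
a2 = 0.5 * (r_p + r_a2)   # after raising apogee to synchronous level
dv = vis_viva(r_p, a2) - vis_viva(r_p, a1)   # impulsive, in-plane burn
print(f"perigee burn to raise apogee to sync level: {dv * 1000:.0f} m/s")
```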
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, John T; Holladay, John; Wagner, Robert
The U.S. Department of Energy's (DOE's) Co-Optimization of Fuels & Engines (Co-Optima) initiative is conducting the early-stage research needed to accelerate the market introduction of advanced fuel and engine technologies. The research includes both spark-ignition (SI) and compression-ignition (CI) combustion approaches, targeting applications that impact the entire on-road fleet (light-, medium-, and heavy-duty vehicles). The initiative's major goals include significant improvements in vehicle fuel economy, lower-cost pathways to reduce emissions, and leveraging diverse U.S. fuel resources. A key objective of Co-Optima's research is to identify new blendstocks that enhance current petroleum blending components, increase blendstock diversity, and provide refiners with increased flexibility to blend fuels with the key properties required to optimize advanced internal combustion engines. This report identifies eight representative blendstocks from five chemical families that have demonstrated the potential to increase boosted SI engine efficiency, meet key fuel quality requirements, and be viable for production at commercial scale by 2025-2030.
[Optimization of the pseudorandom input signals used for the forced oscillation technique].
Liu, Xiaoli; Zhang, Nan; Liang, Hong; Zhang, Zhengbo; Li, Deyu; Wang, Weidong
2017-10-01
The forced oscillation technique (FOT) is an active pulmonary function measurement technique that identifies the mechanical properties of the respiratory system using external excitation signals. FOT commonly uses single-frequency sine, pseudorandom, or periodic impulse excitation signals. Aiming to prevent the time-domain amplitude overshoot that can occur when combining multiple sinusoids into a pseudorandom signal, this paper studied the phase optimization of pseudorandom signals. We tried two methods, random phase combination and a time-frequency domain swapping algorithm, to solve this problem, and used the crest factor to estimate the effect of the optimization. Furthermore, in order to make the pseudorandom signals meet the requirements of respiratory system identification in 4-40 Hz, we compensated the input signals' amplitudes in the low-frequency band (4-18 Hz) according to the frequency-response curve of the oscillation unit. Results showed that the time-frequency domain swapping algorithm could effectively optimize the phase combination of pseudorandom signals. Moreover, when the amplitudes at low frequencies were compensated, the expected stimulus signals meeting the performance requirements were eventually obtained.
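The time-frequency domain swapping idea can be sketched as an iteration that clips the multisine in the time domain and then restores the prescribed magnitude spectrum while keeping the new phases; the 4-40 Hz band matches the text, but the grid, clip level, and iteration count are illustrative, and the low-frequency amplitude compensation step is omitted.

```python
import numpy as np

def crest_factor(x):
    """Peak amplitude over RMS: the quantity the optimization minimizes."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

n = 1024                                   # samples per signal period
bins = np.arange(4, 41)                    # excited bins: 4-40 Hz at 1 Hz spacing
mag = np.zeros(n // 2 + 1)
mag[bins] = 1.0                            # prescribed (flat) amplitude spectrum
rng = np.random.default_rng(2)
X = mag * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n // 2 + 1))

for _ in range(100):                       # time-frequency domain swapping
    x = np.fft.irfft(X, n)
    clip = 0.8 * np.max(np.abs(x))
    x = np.clip(x, -clip, clip)            # flatten overshoots in time domain
    Y = np.fft.rfft(x, n)
    X = mag * np.exp(1j * np.angle(Y))     # restore magnitudes, keep new phases

print(f"crest factor: {crest_factor(np.fft.irfft(X, n)):.2f}")
```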
Reliable numerical computation in an optimal output-feedback design
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1991-01-01
A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is a part of a design algorithm for optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. This approach through the use of an accurate Pade series approximation does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. Usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU
NASA Astrophysics Data System (ADS)
Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis
2016-06-01
Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but requires more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulations, which results in a long, unaffordable design cycle. Modern High Performance Computing systems, especially Graphics Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss reduction of 20%.
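A reference DE/rand/1/bin loop is sketched below; the objective is a cheap stand-in for the GPU-accelerated CFD evaluation of a cascade geometry, and the population size and control parameters are common defaults rather than the study's settings.

```python
import numpy as np
rng = np.random.default_rng(4)

def loss(x):
    """Stands in for the CFD evaluation of a cascade geometry; in the paper
    each call is a GPU-accelerated flow solution, not a cheap formula."""
    return np.sum(x ** 2) + 0.1 * np.sum(np.cos(5.0 * x))

pop_size, d, F, CR, gens = 30, 10, 0.6, 0.9, 150   # DE/rand/1/bin settings
pop = rng.uniform(-1.0, 1.0, (pop_size, d))
fit = np.array([loss(x) for x in pop])
for _ in range(gens):
    for i in range(pop_size):
        idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)                    # differential mutation
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True               # force at least one gene
        trial = np.where(cross, mutant, pop[i])     # binomial crossover
        f = loss(trial)
        if f <= fit[i]:                             # greedy selection
            pop[i], fit[i] = trial, f
best = pop[np.argmin(fit)]
```

Because each population member is evaluated independently, the evaluation loop parallelizes naturally, which is exactly what the GPU acceleration above exploits.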
On Adding Structure to Unstructured Overlay Networks
NASA Astrophysics Data System (ADS)
Leitão, João; Carvalho, Nuno A.; Pereira, José; Oliveira, Rui; Rodrigues, Luís
Unstructured peer-to-peer overlay networks are very resilient to churn and topology changes while incurring little maintenance cost, which makes them an attractive infrastructure for building highly scalable, large-scale services in dynamic networks. Typically, the overlay topology is defined by a peer sampling service that aims at maintaining, in each process, a random partial view of the peers in the system. The resulting random unstructured topology is suboptimal when a specific performance metric is considered. On the other hand, structured approaches (for instance, a spanning tree) may optimize a given target performance metric but are highly fragile; in fact, the cost of maintaining structures with strong constraints may easily become prohibitive in highly dynamic networks. This chapter discusses different techniques that aim at combining the advantages of unstructured and structured networks. Namely, we focus on two distinct approaches, one based on optimizing the overlay and another based on optimizing the gossip mechanism itself.
Meeting the challenges of developing LED-based projection displays
NASA Astrophysics Data System (ADS)
Geißler, Enrico
2006-04-01
The main challenge in developing an LED-based projection system is meeting the brightness requirements of the market; a balanced combination of optical, electrical and thermal parameters must be reached to achieve the performance and cost targets. This paper describes the system design methodology for a digital micromirror display (DMD) based optical engine using LEDs as the light source, starting from the basic physical and geometrical parameters of the DMD and other optical elements, through characterization of the LEDs, to optimizing the system performance by determining optimal driving conditions. LEDs have a luminous flux density that is just at the threshold of acceptance in projection systems, and thus only a fully optimized optical system with a matched set of LEDs can be used. This work resulted in two projection engines, one for a compact pocket projector and the other for a rear-projection television, both of which are currently in commercialization.
Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel
2011-09-01
The problem of tuning the parameters of nonlinear dynamical systems such that the attained results are considered good ones is a relevant one. This article describes the development of a gait optimization system that allows a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GAs). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPG parameters which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint handling techniques, based on tournament selection and a repairing mechanism, are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach achieves low vibration with high velocity and a wide stability margin for a quadruped slow crawl gait.
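The CPG building block can be sketched as coupled Hopf oscillators whose outputs drive joint setpoints; amplitude, frequency, and coupling gain are the kinds of parameters a GA would tune, though the equations and values below are illustrative rather than the paper's exact model.

```python
import numpy as np

def hopf_cpg(mu=1.0, omega=2.0 * np.pi, k=0.5, dt=1e-3, steps=5000):
    """Two coupled Hopf oscillators producing phase-locked limb trajectories.

    mu sets the squared limit-cycle amplitude, omega the stepping frequency,
    and k the coupling gain; these are GA-tunable parameters in this sketch.
    """
    x = np.array([0.1, 0.0])
    y = np.array([0.0, 0.1])
    out = np.empty((steps, 2))
    for t in range(steps):
        r2 = x ** 2 + y ** 2
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        dy += -k * y[::-1]              # push the two oscillators anti-phase
        x, y = x + dt * dx, y + dt * dy # explicit Euler integration
        out[t] = y                      # y drives the joint setpoints
    return out

trajectory = hopf_cpg()
```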
Declarative language design for interactive visualization.
Heer, Jeffrey; Bostock, Michael
2010-01-01
We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
Advanced rotorcraft control using parameter optimization
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1991-01-01
A reliable algorithm for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters is presented. The algorithm is part of a design algorithm for an optimal linear dynamic output feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to control law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. This approach, through the use of an accurate Pade series approximation, does not require the closed-loop system matrix to be diagonalizable. The algorithm has been included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm has been demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
NASA Technical Reports Server (NTRS)
Cake, J. E.; Regetz, J. D., Jr.
1975-01-01
A method is presented for open loop guidance of a solar electric propulsion spacecraft to geosynchronous orbit. The method consists of determining the thrust vector profiles on the ground with an optimization computer program, and performing updates based on the difference between the actual trajectory and that predicted with a precision simulation computer program. The motivation for performing the guidance analysis during the mission planning phase is discussed, and a spacecraft design option that employs attitude orientation constraints is presented. The improvements required in both the optimization program and simulation program are set forth, together with the efforts to integrate the programs into the ground support software for the guidance system.
Trajectories for High Specific Impulse High Specific Power Deep Space Exploration
NASA Technical Reports Server (NTRS)
Polsgrove, T.; Adams, R. B.; Brady, Hugh J. (Technical Monitor)
2002-01-01
Preliminary results are presented for two methods to approximate the mission performance of high-specific-impulse, high-specific-power vehicles. The first method is based on an analytical approximation derived by Williams and Shepherd and can be used to approximate mission performance to the outer planets and interstellar space. The second method is based on a parametric analysis of trajectories created using the well-known trajectory optimization code VARITOP. This parametric analysis allows the reader to approximate payload ratios and optimal power requirements for both one-way and round-trip missions. While this second method only addresses missions to and from Jupiter, future work will encompass all of the outer planet destinations and some interstellar precursor missions.
Constraining neutron guide optimizations with phase-space considerations
NASA Astrophysics Data System (ADS)
Bertelsen, Mads; Lefmann, Kim
2016-09-01
We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to that of guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing ability of the guide to be optimized, ranging from perfectly focusing to no correlation between position and velocity. The second parameter controls the neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate higher signal-to-noise than conventional optimizations. Furthermore, exploration of the parameter controlling neutron intake shows that the simulated optimal neutron intake is close to the analytically predicted value, under the assumption that the guide is dominated by multiple scattering events.
Comprehensive Performance Nutrition for Special Operations Forces.
Daigle, Karen A; Logan, Christi M; Kotwal, Russ S
2015-01-01
Special Operations Forces (SOF) training, combat, and contingency operations are unique and demanding. Performance nutrition within the Department of Defense has emphasized that nutrition is relative to factors related to the desired outcome, which includes successful performance of mentally and physically demanding operations and missions of tactical and strategic importance, as well as nonoperational assignments. Discussed are operational, nonoperational, and patient categories that require different nutrition strategies to facilitate category-specific performance outcomes. Also presented are 10 major guidelines for a SOF comprehensive performance nutrition program, practical nutrition recommendations for Special Operators and medical providers, as well as resources for dietary supplement evaluation. Foundational health concepts, medical treatment, and task-specific performance factors should be considered when developing and systematically implementing a comprehensive SOF performance nutrition program. When tailored to organizational requirements, SOF unit- and culture-specific nutrition education and services can optimize individual Special Operator performance, overall unit readiness, and ultimately, mission success.
Parallel Aircraft Trajectory Optimization with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Gray, Justin S.; Naylor, Bret
2016-01-01
Trajectory optimization is an integral component of aerospace vehicle design, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single- and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the nonlinear analysis evaluations and the derivative computations themselves. The constraint aggregation results revealed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
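The core of an LGL collocation scheme is small enough to sketch: dynamics defects are formed by applying a Lagrange differentiation matrix at the LGL nodes and subtracting the state rates, and the optimizer drives these defects to zero. The toy dynamics and segment scaling below are illustrative assumptions, not the tool's formulation.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def lgl_nodes(n):
    """Legendre-Gauss-Lobatto nodes on [-1, 1]: endpoints plus roots of P'_{n-1}."""
    interior = Legendre.basis(n - 1).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

def diff_matrix(tau):
    """Lagrange differentiation matrix D such that D @ x approximates dx/dtau."""
    n = len(tau)
    w = np.array([1.0 / np.prod(tau[i] - np.delete(tau, i)) for i in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = w[j] / (w[i] * (tau[i] - tau[j]))
    np.fill_diagonal(D, -D.sum(axis=1))   # rows sum to zero
    return D

# Collocation defects for xdot = f(x) on one segment: drive these to zero.
tau = lgl_nodes(6)
D = diff_matrix(tau)
f = lambda x: -x                          # toy dynamics
x_guess = np.exp(-(tau + 1.0))            # candidate state values at the nodes
defect = D @ x_guess - f(x_guess)         # dt/dtau = 1 for this 2-unit segment
# defect is small but nonzero: the exponential is not exactly a polynomial.
```

Because each defect depends only on the states within its segment, the resulting Jacobian is block-sparse, which is the sparsity the text says the parallel implementation exploits.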
Characterization of Softmagnetic Thin Layers Using Barkhausen Noise Microscopy
2001-04-01
For magnetoresistive (MR) sensors, softmagnetic thin layer systems are used. Optimal performance of these layers requires homogeneous magnetic properties, especially a... Sendust, used in inductive sensors, and nanocrystalline NiFe, used in MR-sensors. In quality correlations to Barkhausen noise parameters were found... Brillouin scattering are frequently used. An important issue is the influence of mechanical properties, e.g. residual stress, on the magnetic performance
Optimal Output Trajectory Redesign for Invertible Systems
NASA Technical Reports Server (NTRS)
Devasia, S.
1996-01-01
Given a desired output trajectory, inversion-based techniques find the input-state trajectories required to exactly track that output. These techniques have been successfully applied to the endpoint tracking control of multijoint flexible manipulators and to aircraft control. The specified output trajectory uniquely determines the required input and state trajectories, which are found through inversion. These input-state trajectories exactly track the desired output; however, they might not meet acceptable performance requirements. For example, during slewing maneuvers of flexible structures, the structural deformations, which depend on the required state trajectories, may be unacceptably large. Further, the required inputs might cause actuator saturation during an exact tracking maneuver, for example, in the flight control of conventional takeoff and landing aircraft. In such situations, a compromise is desired between the tracking requirement and other goals such as reduction of internal vibrations and prevention of actuator saturation; the desired output trajectory needs to be redesigned. Here, we pose the trajectory redesign problem as the optimization of a general quadratic cost function and solve it in the context of linear systems. The solution is obtained as an off-line prefilter of the desired output trajectory. An advantage of our technique is that the prefilter is independent of the particular trajectory; it can therefore be precomputed, which is a major advantage over other optimization approaches. Previous works have addressed the issue of preshaping inputs to minimize residual and in-maneuver vibrations for flexible structures, with the command preshaping computed off-line. Minimization of quadratic cost functions has also been used previously to preshape command inputs for disturbance rejection. All of these approaches are applicable when the inputs to the system are known a priori. Typically, outputs (not inputs) are specified in tracking problems, and hence the input trajectories have to be computed. The inputs are, however, difficult to determine for non-minimum phase systems like flexible structures. One approach to this problem is to (1) choose a tracking controller (the desired output trajectory is now an input to the closed-loop system) and (2) redesign this input to the closed-loop system; thus we effectively perform output redesign. These redesigns are, however, dependent on the choice of the tracking controller, so the controller optimization and trajectory redesign problems become coupled; this coupled optimization is still an open problem. In contrast, we decouple the trajectory redesign problem from the choice of feedback-based tracking controller. It is noted that our approach remains valid when a particular tracking controller is chosen. In addition, the formulation of our problem not only allows for the minimization of residual vibration, as in available techniques, but also allows for the optimal reduction of vibrations during the maneuver, e.g., in the attitude control of flexible spacecraft. We begin by formulating the optimal output trajectory redesign problem and then solve it in the context of general linear systems. This theory is then applied to an example flexible structure, and simulation results are provided.
Two-Dimensional High-Lift Aerodynamic Optimization Using Neural Networks
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions trained on a computational data set. The numerical data were generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically based maximum lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. The 'pressure difference rule,' which states that the maximum lift condition corresponds to a certain pressure difference between the peak suction pressure and the pressure at the trailing edge of the element, was applied and verified with experimental observations for this configuration. Multiple-input, single-output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural nets were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 44% compared with traditional gradient-based optimization procedures for multiple optimization runs.
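The surrogate-then-optimize pattern described here is easy to show in miniature. The following sketch uses a toy objective and scikit-learn in place of the study's Navier-Stokes data and training code; `lift`, the bounds, and the sample budget are all illustrative assumptions.

```python
# Minimal sketch of the surrogate-then-optimize pattern: fit a neural network
# to sampled data, then hand the cheap surrogate to a gradient-based optimizer.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def lift(x):                       # stand-in for the expensive CFD evaluation
    gap, overlap = x
    return -((gap - 0.02) ** 2 + (overlap - 0.01) ** 2)

X = rng.uniform([-0.05, -0.05], [0.05, 0.05], size=(200, 2))
y = np.array([lift(x) for x in X])

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, y)

# Maximize predicted lift (minimize its negative); gradients come cheaply
# from the surrogate instead of repeated flow solutions.
res = minimize(lambda x: -net.predict(x[None, :])[0], x0=np.zeros(2),
               bounds=[(-0.05, 0.05)] * 2, method="L-BFGS-B")
print(res.x)   # surrogate-optimal gap/overlap settings
```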
Dynamic least-cost optimisation of wastewater system remedial works requirements.
Vojinovic, Z; Solomatine, D; Price, R K
2006-01-01
In recent years, there has been increasing concern about wastewater system failure and the identification of an optimal set of remedial works requirements. So far, several methodologies have been developed and applied in asset management activities by various water companies worldwide, but often with limited success. To fill the gap, several research projects have been undertaken to explore various algorithms for optimising remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out for wastewater assets. Some of the major deficiencies of commonly used methods lie in one or more of the following aspects: inadequate representation of system complexity, failure to incorporate a dynamic model into the decision-making loop, the choice of an appropriate optimisation technique, and experience in applying that technique. This paper is oriented towards resolving these issues and discusses a new approach for the optimisation of wastewater system remedial works requirements. It is proposed that the search for an optimal solution is performed by a global optimisation tool (with various random search algorithms) while the system performance is simulated by a hydrodynamic pipe network model. The work on assembling all required elements and developing appropriate interface protocols between the two tools, aimed at decoding potential remedial solutions into the pipe network model and calculating the corresponding scenario costs, is currently underway.
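The proposed coupling can be caricatured in a few lines. In the sketch below, the hydrodynamic model is reduced to a stub (`simulate_flood_volume` and all costs are invented for illustration; the paper's simulator is a full pipe-network model), and a plain random search stands in for the global optimisation tool's algorithms.

```python
# Illustrative optimiser/simulator loop with all names and figures hypothetical.
import numpy as np

rng = np.random.default_rng(1)
N_PIPES, UPGRADE_COST = 12, 10.0

def simulate_flood_volume(plan):
    """Stub for the hydrodynamic pipe-network model: returns a flood volume
    that decreases as more (and more critical) pipes are upgraded."""
    criticality = np.linspace(3.0, 0.5, N_PIPES)
    return float(np.sum(criticality * (1 - plan)))

def total_cost(plan):
    return UPGRADE_COST * plan.sum() + 5.0 * simulate_flood_volume(plan)

best_plan, best_cost = None, np.inf
for _ in range(2000):                      # pure random search for clarity
    plan = rng.integers(0, 2, N_PIPES)     # 1 = rehabilitate this pipe
    c = total_cost(plan)
    if c < best_cost:
        best_plan, best_cost = plan, c
print(best_plan, best_cost)
```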
Expanding the PACS archive to support clinical review, research, and education missions
NASA Astrophysics Data System (ADS)
Honeyman-Buck, Janice C.; Frost, Meryll M.; Drane, Walter E.
1999-07-01
Designing an image archive and retrieval system that supports multiple users with many different requirements and patterns of use without compromising the performance and functionality required by diagnostic radiology is an intellectual and technical challenge. A diagnostic archive, optimized for performance when retrieving diagnostic images for radiologists, needed to be expanded to support a growing clinical review network, the University of Florida Brain Institute's demands for neuro-imaging, Biomedical Engineering's imaging sciences, and an electronic teaching file. Each of the groups presented a different set of problems for the designers of the system. In addition, the radiologists did not want to see any loss of performance as new users were added.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinones, Armando, Sr.; Bibeau, Tiffany A.; Ho, Clifford Kuofei
2008-08-01
Finite-element analyses were performed to simulate the response of a hypothetical vertical masonry wall subject to different lateral loads with and without continuous horizontal filament ties laid between rows of concrete blocks. A static loading analysis and cost comparison were also performed to evaluate optimal materials and designs for the spacers affixed to the filaments. Results showed that polypropylene, ABS, and polyethylene (high density) were suitable materials for the spacers based on performance and cost, and the short T-spacer design was optimal based on its performance and functionality. Simulations of vertical walls subject to static loads representing 100 mph winds (0.2 psi) and a seismic event (0.66 psi) showed that the simulated walls performed similarly and adequately when subject to these loads with and without the ties. Additional simulations and tests are required to assess the performance of actual walls with and without the ties under greater loads and more realistic conditions (e.g., cracks, non-linear response).
Human Mars Ascent Vehicle Performance Sensitivities
NASA Technical Reports Server (NTRS)
Polsgrove, Tara P.; Thomas, Herbert D.
2016-01-01
Human Mars mission architecture studies have shown that the ascent vehicle mass drives performance requirements for the descent and in-space transportation elements. Understanding the sensitivity of Mars ascent vehicle (MAV) mass to various mission and vehicle design choices enables overall transportation system optimization. This paper presents the results of a variety of sensitivity trades affecting MAV performance including: landing site latitude, target orbit, initial thrust to weight ratio, staging options, specific impulse, propellant type and engine design.
CATO: a CAD tool for intelligent design of optical networks and interconnects
NASA Astrophysics Data System (ADS)
Chlamtac, Imrich; Ciesielski, Maciej; Fumagalli, Andrea F.; Ruszczyk, Chester; Wedzinga, Gosse
1997-10-01
Increasing communication speed requirements have created a great interest in very high speed optical and all-optical networks and interconnects. The design of these optical systems is a highly complex task, requiring the simultaneous optimization of various parts of the system, ranging from optical components' characteristics to access protocol techniques. Currently there are no computer aided design (CAD) tools on the market to support the interrelated design of all parts of optical communication systems, thus the designer has to rely on costly and time consuming testbed evaluations. The objective of the CATO (CAD tool for optical networks and interconnects) project is to develop a prototype of an intelligent CAD tool for the specification, design, simulation and optimization of optical communication networks. CATO allows the user to build an abstract, possibly incomplete, model of the system, and determine its expected performance. Based on design constraints provided by the user, CATO will automatically complete an optimum design, using mathematical programming techniques, intelligent search methods and artificial intelligence (AI). Initial design and testing of a CATO prototype (CATO-1) have been completed recently. The objective was to prove the feasibility of combining AI techniques, simulation techniques, an optical device library and a graphical user interface into a flexible CAD tool for obtaining optimal communication network designs in terms of system cost and performance. CATO-1 is an experimental tool for designing packet-switching wavelength division multiplexing all-optical communication systems using a LAN/MAN ring topology as the underlying network. The two specific AI algorithms incorporated are simulated annealing and a genetic algorithm. CATO-1 finds the optimal number of transceivers for each network node, using an objective function that includes the cost of the devices and the overall system performance.
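One of CATO-1's two AI algorithms, simulated annealing, is simple enough to sketch. The toy objective below (a device-cost term plus an invented contention-delay term; CATO's real figure of merit comes from network simulation) anneals the per-node transceiver counts.

```python
# Hedged sketch of simulated annealing over per-node transceiver counts.
# The objective is a stand-in for CATO's simulated cost/performance metrics.
import math, random

random.seed(2)
N_NODES, MAX_T = 8, 6

def objective(counts):
    device_cost = 50.0 * sum(counts)
    delay = sum(100.0 / c for c in counts)   # toy: fewer transceivers -> more delay
    return device_cost + delay

state = [1] * N_NODES
cost = objective(state)
T = 100.0
while T > 0.01:
    node = random.randrange(N_NODES)
    cand = state.copy()
    cand[node] = min(MAX_T, max(1, cand[node] + random.choice([-1, 1])))
    delta = objective(cand) - cost
    if delta < 0 or random.random() < math.exp(-delta / T):
        state, cost = cand, cost + delta     # accept improving or lucky moves
    T *= 0.999                               # geometric cooling schedule
print(state, cost)
```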
Kumyaito, Nattapon; Yupapin, Preecha; Tamee, Kreangsak
2018-01-08
An effective training plan is an important factor in sports training to enhance athletic performance. A poorly considered training plan may result in injury to the athlete and in overtraining. Good training plans normally require expert input, which may have a cost too great for many athletes, particularly amateur athletes. The objectives of this research were to create a practical cycling training plan that substantially improves athletic performance while satisfying essential physiological constraints. Adaptive Particle Swarm Optimization using ɛ-constraint methods was used to formulate such a plan and simulate the likely performance outcomes. The physiological constraints considered in this study were monotony, chronic training load ramp rate and daily training impulse. A comparison of results from our simulations against a training plan from British Cycling, which we used as our standard, showed that our training plan outperformed the benchmark in terms of both athletic performance and satisfying all physiological constraints.
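A stripped-down version of the constrained swarm search is sketched below. The fitness proxy and constraint limits are illustrative placeholders, not the study's physiological model, and a quadratic penalty stands in for the ɛ-constraint handling; only the monotony and weekly ramp terms echo the constraints named in the abstract.

```python
# Simplified constrained PSO over a 28-day plan of daily training impulses.
import numpy as np

rng = np.random.default_rng(3)
DAYS, SWARM, ITERS = 28, 30, 400

def penalized_score(plan):
    fitness = plan.sum()                          # crude training-stimulus proxy
    weekly = plan.reshape(4, 7)
    monotony = weekly.mean(axis=1) / (weekly.std(axis=1) + 1e-9)
    ramp = np.diff(weekly.sum(axis=1))            # week-to-week load ramp
    penalty = 1e3 * (np.maximum(monotony - 2.0, 0).sum()
                     + np.maximum(np.abs(ramp) - 100.0, 0).sum())
    return fitness - penalty                      # maximize

pos = rng.uniform(0, 150, (SWARM, DAYS))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([penalized_score(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()
for _ in range(ITERS):
    r1, r2 = rng.random((2, SWARM, DAYS))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 150)              # daily TRIMP bounds
    vals = np.array([penalized_score(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()
print(gbest.round(0))
```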
Who Chokes Under Pressure? The Big Five Personality Traits and Decision-Making under Pressure.
Byrne, Kaileigh A; Silasi-Mansat, Crina D; Worthy, Darrell A
2015-02-01
The purpose of the present study was to examine whether the Big Five personality factors could predict who thrives or chokes under pressure during decision-making. The effects of the Big Five personality factors on decision-making ability and performance under social (Experiment 1) and combined social and time pressure (Experiment 2) were examined using the Big Five Personality Inventory and a dynamic decision-making task that required participants to learn an optimal strategy. In Experiment 1, a hierarchical multiple regression analysis showed an interaction between neuroticism and pressure condition. Neuroticism negatively predicted performance under social pressure, but did not affect decision-making under low pressure. Additionally, the negative effect of neuroticism under pressure was replicated using a combined social and time pressure manipulation in Experiment 2. These results support distraction theory whereby pressure taxes highly neurotic individuals' cognitive resources, leading to sub-optimal performance. Agreeableness also negatively predicted performance in both experiments.
NASA Technical Reports Server (NTRS)
Jenkins, R. M.
1983-01-01
The present effort represents an extension of previous work wherein a calculation model for performing rapid pitchline optimization of axial gas turbine geometry, including blade profiles, is developed. The model requires no specification of geometric constraints. Output includes aerodynamic performance (adiabatic efficiency), hub-tip flow-path geometry, blade chords, and estimates of blade shape. Presented herein is a verification of the aerodynamic performance portion of the model, whereby detailed turbine test-rig data, including rig geometry, is input to the model to determine whether tested performance can be predicted. An array of seven (7) NASA single-stage axial gas turbine configurations is investigated, ranging in size from 0.6 kg/s to 63.8 kg/s mass flow and in specific work output from 153 J/g to 558 J/g at design (hot) conditions; stage loading factor ranges from 1.15 to 4.66.
NASA Astrophysics Data System (ADS)
Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan
2015-10-01
Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
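The starting point the authors extend, a plain self-organising map, fits in a short script. The sketch below learns only the latent-variable map from two unequally noisy sensors; it does not implement the paper's noise-model learning or PPC readout, and all constants are illustrative.

```python
# Bare-bones 1-D self-organising map over a two-channel noisy input.
import numpy as np

rng = np.random.default_rng(4)
N_NEURONS, STEPS = 50, 20000
W = rng.uniform(-1, 1, (N_NEURONS, 2))          # one weight per input channel

for t in range(STEPS):
    theta = rng.uniform(-1, 1)                  # latent variable (e.g., azimuth)
    x = theta + rng.normal(0, [0.1, 0.3])       # two sensors, different noise
    bmu = np.argmin(np.sum((W - x) ** 2, axis=1))    # best-matching unit
    lr = 0.5 * (1 - t / STEPS)                  # decaying learning rate
    sigma = max(1.0, 8.0 * (1 - t / STEPS))     # shrinking neighbourhood
    h = np.exp(-((np.arange(N_NEURONS) - bmu) ** 2) / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)              # pull neighbourhood toward input
```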
NASA Astrophysics Data System (ADS)
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of a standard single-objective deterministic Genetic Algorithm. The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
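The discrete chromosome the thesis describes can be illustrated with a toy genetic algorithm. In the sketch below, every option table, the reliability floor, and the penalty weight are invented; a real run would encode many more subsystems and redundancy levels.

```python
# Toy GA over discrete subsystem technology options (tables are hypothetical).
import numpy as np

rng = np.random.default_rng(5)
# per-subsystem option tables: (mass kg, reliability) for 4 subsystems x 3 options
MASS = np.array([[40, 55, 70], [90, 120, 150], [25, 30, 45], [60, 80, 95]])
REL = np.array([[.90, .95, .99], [.92, .96, .985],
                [.88, .93, .97], [.91, .94, .98]])

def fitness(pop):
    idx = np.arange(4)
    mass = MASS[idx, pop].sum(axis=1)
    rel = REL[idx, pop].prod(axis=1)
    return -(mass + 1e4 * np.maximum(0.80 - rel, 0))   # penalize low reliability

pop = rng.integers(0, 3, (40, 4))                      # option index per subsystem
for _ in range(100):
    f = fitness(pop)
    a, b = rng.integers(0, 40, (2, 40))                # tournament selection
    parents = np.where((f[a] > f[b])[:, None], pop[a], pop[b])
    mask = rng.random((40, 4)) < 0.5                   # uniform crossover
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    mutate = rng.random((40, 4)) < 0.05                # random-reset mutation
    children[mutate] = rng.integers(0, 3, mutate.sum())
    pop = children
best = pop[fitness(pop).argmax()]
print(best, -fitness(pop).max())                       # best options, penalized mass
```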
Frame synchronization for the Galileo code
NASA Technical Reports Server (NTRS)
Arnold, S.; Swanson, L.
1991-01-01
Results are reported on the performance of the Deep Space Network's frame synchronizer for the (15,1/4) convolutional code after Viterbi decoding. The threshold that optimizes the probability of acquiring true sync within four frames is found, using a strategy that requires next-frame verification.
Monitoring Strategies in Permeable Pavement Systems to Optimize Maintenance Scheduling
As the surface in a permeable pavement system clogs and performance decreases, maintenance is required to preserve the design function. Currently, guidance is limited for scheduling maintenance on an as needed basis. Previous research has shown that surface clogging in a permea...
Code of Federal Regulations, 2010 CFR
2010-07-01
... modification of those projects to optimize performance. It includes the selection of appropriate measures that... fee program that are available for sale prior to being fulfilled in accordance with an approved mitigation project plan. Advance credit sales require an approved in-lieu fee program instrument that meets...
Row-crop planter requirements to support variable-rate seeding of maize
USDA-ARS?s Scientific Manuscript database
Current planting technology possesses the ability to increase crop productivity and improve field efficiency by precisely metering and placing crop seeds. Planter performance depends on using the correct planter and technology setup which consists of determining optimal settings for different planti...
NASA Astrophysics Data System (ADS)
Lobanov, Nikolai R.; Tunningley, Thomas; Linardakis, Peter
2018-04-01
Tandem electrostatic accelerators often require the flexibility to operate at a variety of terminal voltages to accommodate various user requirements. However, the ion beam transmission will only be optimal for a limited range of terminal voltages. This paper describes the operational performance of a novel focusing system that expands the range of terminal voltages for optimal transmission. This is accomplished by controlling the gradient of the entrance of the low-energy tube, providing an additional focusing element. In this specific case it is achieved by applying up to 150 kV to the fifth electrode of the first unit of the accelerator tube. Numerical simulations and beam transmission tests have been performed to confirm the effectiveness of the lens. An analytical expression has been derived describing its focal properties. These tests demonstrate that the entrance lens control eliminates the need to short out sections of the tube for operation at low terminal voltage.
Multipurpose silicon photonics signal processor core.
Pérez, Daniel; Gasulla, Ivana; Crudgington, Lee; Thomson, David J; Khokhar, Ali Z; Li, Ke; Cao, Wei; Mashanovich, Goran Z; Capmany, José
2017-09-21
Integrated photonics changes the scaling laws of information and communication systems offering architectural choices that combine photonics with electronics to optimize performance, power, footprint, and cost. Application-specific photonic integrated circuits, where particular circuits/chips are designed to optimally perform particular functionalities, require a considerable number of design and fabrication iterations leading to long development times. A different approach inspired by electronic Field Programmable Gate Arrays is the programmable photonic processor, where a common hardware implemented by a two-dimensional photonic waveguide mesh realizes different functionalities through programming. Here, we report the demonstration of such a reconfigurable waveguide mesh in silicon. We demonstrate over 20 different functionalities with a simple seven-hexagonal-cell structure, which can be applied to different fields including communications, chemical and biomedical sensing, signal processing, multiprocessor networks, and quantum information systems. Our work is an important step toward this paradigm. Integrated optical circuits today are typically designed for a few special functionalities and require complex design and development procedures. Here, the authors demonstrate a reconfigurable but simple silicon waveguide mesh with different functionalities.
Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)
NASA Technical Reports Server (NTRS)
Dalton, Shelly D.; Daley, Philip C.
1988-01-01
As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real-time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.
Compact FPGA hardware architecture for public key encryption in embedded devices
Rodríguez-Flores, Luis; Morales-Sandoval, Miguel; Cumplido, René; Feregrino-Uribe, Claudia; Algredo-Badillo, Ignacio
2018-01-01
Security is a crucial requirement in the envisioned applications of the Internet of Things (IoT), where most of the underlying computing platforms are embedded systems with reduced computing capabilities and energy constraints. In this paper we present the design and evaluation of a scalable low-area FPGA hardware architecture that serves as a building block to accelerate the costly operations of exponentiation and multiplication in GF(p), commonly required in security protocols relying on public key encryption, such as in key agreement, authentication and digital signature. The proposed design can process operands of different size using the same datapath, which exhibits a significant reduction in area without loss of efficiency if compared to representative state of the art designs. For example, our design uses 96% less standard logic than a similar design optimized for performance, and 46% less resources than other design optimized for area. Even using fewer area resources, our design still performs better than its embedded software counterparts (190x and 697x).
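The costly operation such architectures accelerate has a compact software reference model. The sketch below implements textbook left-to-right square-and-multiply modular exponentiation in GF(p); hardware designs like the one described typically replace each modular multiply with Montgomery multiplication, which is not shown here.

```python
# Functional reference model of modular exponentiation in GF(p); useful only
# for checking hardware results, not representative of the FPGA datapath.
def mod_exp(base: int, exp: int, p: int) -> int:
    result = 1
    base %= p
    for bit in bin(exp)[2:]:                  # scan exponent bits MSB -> LSB
        result = (result * result) % p        # always square
        if bit == "1":
            result = (result * base) % p      # conditionally multiply
    return result

assert mod_exp(7, 560, 561) == pow(7, 560, 561)   # cross-check vs. built-in
```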
Effect of design selection on response surface performance
NASA Technical Reports Server (NTRS)
Carpenter, William C.
1993-01-01
The mathematical formulation of the engineering optimization problem is given. Evaluation of the objective function and constraint equations can be very expensive in a computational sense. Thus, it is desirable to use as few evaluations as possible in obtaining its solution. One approach is to develop approximations to the objective function and/or constraint equations and then to solve the problem using the approximations in place of the original functions. These approximations are referred to as response surfaces. The desirability of using response surfaces depends upon the number of functional evaluations required to build the response surfaces compared to the number required in the direct solution of the problem without approximations. The present study is concerned with evaluating the performance of response surfaces so that a decision can be made as to their effectiveness in optimization applications. In particular, this study focuses on how the quality of approximations is affected by design selection. Polynomial approximations and neural net approximations are considered.
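A minimal response-surface workflow, assuming a full quadratic polynomial basis in two variables (the study also considers neural-net approximations, not shown), looks like this:

```python
# Sketch: sample an expensive function a few times, fit a quadratic response
# surface by least squares, and optimize the cheap fit instead.
import numpy as np
from scipy.optimize import minimize

def expensive(x):                     # stand-in for a costly analysis code
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, (15, 2))       # 15 evaluations build the surface
y = np.array([expensive(x) for x in X])

def basis(x):                         # [1, x1, x2, x1^2, x1*x2, x2^2]
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x1*x2, x2**2], axis=-1)

c, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
surrogate = lambda x: basis(np.asarray(x)) @ c
res = minimize(surrogate, x0=[0.0, 0.0])
print(res.x)                          # near (1, -0.5) if the fit is adequate
```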
Displacement based multilevel structural optimization
NASA Technical Reports Server (NTRS)
Striz, Alfred G.
1995-01-01
Multidisciplinary design optimization (MDO) is expected to play a major role in the competitive transportation industries of tomorrow, i.e., in the design of aircraft and spacecraft, of high speed trains, boats, and automobiles. All of these vehicles require maximum performance at minimum weight to keep fuel consumption low and conserve resources. Here, MDO can deliver mathematically based design tools to create systems with optimum performance subject to the constraints of disciplines such as structures, aerodynamics, controls, etc. Although some applications of MDO are beginning to surface, the key to a widespread use of this technology lies in the improvement of its efficiency. This aspect is investigated here for the MDO subset of structural optimization, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures (here, statically indeterminate trusses and beams for proof of concept) is performed. In the system level optimization, the design variables are the coefficients of assumed displacement functions, and the load unbalance resulting from the solution of the stiffness equations is minimized. Constraints are placed on the deflection amplitudes and the weight of the structure. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. This approach is expected to prove very efficient, especially for complex structures, since the design task is broken down into a large number of small and efficiently handled subtasks, each with only a small number of variables. This partitioning will also allow for the use of parallel computing, first, by sending the system and subsystems level computations to two different processors, ultimately, by performing all subsystems level optimizations in a massively parallel manner on separate processors. It is expected that the subsystems level optimizations can be further improved through the use of controlled growth, a method which reduces an optimization to a more efficient analysis with only a slight degradation in accuracy. The efficiency of all proposed techniques is being evaluated relative to the performance of the standard single level optimization approach where the complete structure is weight minimized under the action of all given constraints by one processor and to the performance of simultaneous analysis and design which combines analysis and optimization into a single step. It is expected that the present approach can be expanded to include additional structural constraints (buckling, free and forced vibration, etc.) or other disciplines (passive and active controls, aerodynamics, etc.) for true MDO.
Impacts of Intelligent Automated Quality Control on a Small Animal APD-Based Digital PET Scanner
NASA Astrophysics Data System (ADS)
Charest, Jonathan; Beaudoin, Jean-François; Bergeron, Mélanie; Cadorette, Jules; Arpin, Louis; Lecomte, Roger; Brunet, Charles-Antoine; Fontaine, Réjean
2016-10-01
Stable system performance is mandatory to warrant the accuracy and reliability of biological results relying on small animal positron emission tomography (PET) imaging studies. This simple requirement sets the ground for imposing routine quality control (QC) procedures to keep PET scanners at a reliable optimal performance level. However, such procedures can become burdensome to implement for scanner operators, especially taking into account the increasing number of data acquisition channels in newer generation PET scanners. In systems using pixel detectors to achieve enhanced spatial resolution and contrast-to-noise ratio (CNR), the QC workload rapidly increases to unmanageable levels due to the number of independent channels involved. An artificial intelligence based QC system, referred to as Scanner Intelligent Diagnosis for Optimal Performance (SIDOP), was proposed to help reduce the QC workload by performing automatic channel fault detection and diagnosis. SIDOP consists of four high-level modules that employ machine learning methods to perform their tasks: Parameter Extraction, Channel Fault Detection, Fault Prioritization, and Fault Diagnosis. Ultimately, SIDOP submits a prioritized faulty channel list to the operator and proposes actions to correct them. To validate that SIDOP can perform QC procedures adequately, it was deployed on a LabPET™ scanner and multiple performance metrics were extracted. After multiple corrections on sub-optimal scanner settings, an 8.5% (with a 95% confidence interval (CI) of [7.6, 9.3]) improvement in the CNR, a 17.0% (CI: [15.3, 18.7]) decrease of the uniformity percentage standard deviation, and a 6.8% gain in global sensitivity were observed. These results confirm that SIDOP can indeed be of assistance in performing QC procedures and restoring performance to optimal figures.
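The channel-screening pattern can be made concrete with a much simpler detector than SIDOP's trained models: the sketch below flags and prioritizes outlier channels by a robust z-score on per-channel count rates (the statistic, threshold, and simulated rates are all illustrative).

```python
# Illustrative channel fault screening; SIDOP itself uses machine learning.
import numpy as np

def flag_channels(counts, z_thresh=4.0):
    """Return channel indices sorted by how far each channel's rate deviates
    from the detector-wide median (most suspect first), plus all z-scores."""
    counts = np.asarray(counts, dtype=float)
    med = np.median(counts)
    mad = np.median(np.abs(counts - med)) + 1e-12   # robust spread estimate
    z = np.abs(counts - med) / (1.4826 * mad)
    suspects = np.flatnonzero(z > z_thresh)
    return suspects[np.argsort(z[suspects])[::-1]], z

rates = np.random.default_rng(7).poisson(1000, 4608).astype(float)
rates[[12, 307]] = [0, 9000]                # inject a dead and a noisy channel
prioritized, scores = flag_channels(rates)
print(prioritized[:5])                      # channels 307 and 12 top the list
```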
Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors
NASA Astrophysics Data System (ADS)
Tun, Min Thaw; Sakaguchi, Daisaku
2016-06-01
A high pressure ratio and a wide operating range are required for a turbocharger in diesel engines. A recirculation flow type casing treatment is effective for flow range enhancement of centrifugal compressors. Two ring grooves on a suction pipe and a shroud casing wall are connected by means of an annular passage, and a stable recirculation flow is formed at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation flow type casing is modified and optimized by using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization results give an optimized casing design with improved adiabatic efficiency over a wide operating flow rate range. A sensitivity analysis of efficiency as a function of the design parameters has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, in which the increment of entropy rise is minimized at the grooves and passages of the rotating impeller.
Optimal control of motorsport differentials
NASA Astrophysics Data System (ADS)
Tremlett, A. J.; Massaro, M.; Purdy, D. J.; Velenis, E.; Assadian, F.; Moore, A. P.; Halley, M.
2015-12-01
Modern motorsport limited slip differentials (LSD) have evolved to become highly adjustable, allowing the torque bias that they generate to be tuned in the corner entry, apex and corner exit phases of typical on-track manoeuvres. The task of finding the optimal torque bias profile under such varied vehicle conditions is complex. This paper presents a nonlinear optimal control method which is used to find the minimum time optimal torque bias profile through a lane change manoeuvre. The results are compared to traditional open and fully locked differential strategies, in addition to considering related vehicle stability and agility metrics. An investigation into how the optimal torque bias profile changes with reduced track-tyre friction is also included in the analysis. The optimal LSD profile was shown to give a performance gain over its locked differential counterpart in key areas of the manoeuvre where a quick direction change is required. The methodology proposed can be used to find both optimal passive LSD characteristics and as the basis of a semi-active LSD control algorithm.
NASA Astrophysics Data System (ADS)
Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro
2018-06-01
A multi-fidelity optimization technique based on an efficient global optimization process with a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to select additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
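The expected-improvement step at the heart of such efficient global optimization loops is short to write down. The sketch below uses an ordinary kriging (Gaussian process) surrogate from scikit-learn rather than the paper's hybrid RBF-plus-kriging model, and a toy one-dimensional objective.

```python
# Sketch of one expected-improvement (EI) iteration for minimization.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                    # expensive objective (toy)
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.1], [0.9], [1.7], [2.5]])   # initial samples
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              normalize_y=True).fit(X, y)

grid = np.linspace(0, 3, 400)[:, None]
mu, sd = gp.predict(grid, return_std=True)
best = y.min()
imp = best - mu                              # predicted improvement over best
with np.errstate(divide="ignore", invalid="ignore"):
    z = imp / sd
ei = imp * norm.cdf(z) + sd * norm.pdf(z)    # EI closed form
ei[sd == 0] = 0.0                            # no information, no improvement
x_next = grid[ei.argmax()]                   # next point to evaluate with f
print(x_next)
```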
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.
Siebers, Jeffrey V
2008-04-04
Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop. However, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs of less than 100 cGy (1.5% of the prescription dose). Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update, or (2) a once-corrected (OC) MC-hybrid optimization, where an MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per-beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are re-computed with MC using 2% per-beam nominal statistical precision, and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per-beam simulations from the combination of the 7 beams is ~3% with respect to maximum dose for voxels with D>0.5D(max). The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.
Proof of concept demonstration of optimal composite MRI endpoints for clinical trials.
Edland, Steven D; Ard, M Colin; Sridhar, Jaiashre; Cobia, Derin; Martersteck, Adam; Mesulam, M Marsel; Rogalski, Emily J
2016-09-01
Atrophy measures derived from structural MRI are promising outcome measures for early phase clinical trials, especially for rare diseases such as primary progressive aphasia (PPA), where the small available subject pool limits our ability to perform meaningfully powered trials with traditional cognitive and functional outcome measures. We investigated a composite atrophy index in 26 PPA participants with longitudinal MRIs separated by two years. Rogalski et al. [Neurology 2014;83:1184-1191] previously demonstrated that atrophy of the left perisylvian temporal cortex (PSTC) is a highly sensitive measure of disease progression in this population and a promising endpoint for clinical trials. Using methods described by Ard et al. [Pharmaceutical Statistics 2015;14:418-426], we constructed a composite atrophy index composed of a weighted sum of volumetric measures of 10 regions of interest within the left perisylvian cortex using weights that maximize signal-to-noise and minimize sample size required of trials using the resulting score. Sample size required to detect a fixed percentage slowing in atrophy in a two-year clinical trial with equal allocation of subjects across arms and 90% power was calculated for the PSTC and optimal composite surrogate biomarker endpoints. The optimal composite endpoint required 38% fewer subjects to detect the same percent slowing in atrophy than required by the left PSTC endpoint. Optimal composites can increase the power of clinical trials and increase the probability that smaller trials are informative, an observation especially relevant for PPA, but also for related neurodegenerative disorders including Alzheimer's disease.
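The weighting principle behind such composites has a closed form under standard assumptions: for a linear composite w'x with mean change vector mu and covariance Sigma, the signal-to-noise ratio w'mu / sqrt(w'Sigma w) is maximized, up to scale, by w proportional to Sigma^{-1} mu. The sketch below checks this numerically on simulated data (not the PPA cohort's).

```python
# Numerical check of the optimal-composite weighting principle on fake data.
import numpy as np

rng = np.random.default_rng(8)
k = 10                                        # regions of interest
mu = rng.uniform(0.5, 2.0, k)                 # mean 2-year atrophy per ROI
A = rng.normal(size=(k, k))
Sigma = A @ A.T + k * np.eye(k)               # a valid covariance matrix

w = np.linalg.solve(Sigma, mu)                # optimal weights (any scaling)
w /= np.abs(w).sum()

def snr(w):
    return (w @ mu) / np.sqrt(w @ Sigma @ w)

print(snr(w), snr(np.ones(k) / k))            # composite beats equal weighting
# Required sample size scales as 1/SNR^2, which is how a higher-SNR composite
# translates into the kind of subject-count reduction the abstract reports.
```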
Beyond ADA Accessibility Requirements: Meeting Seniors' Needs for Toilet Transfers.
Lee, Su Jin; Sanford, Jon; Calkins, Margaret; Melgen, Sarah; Endicott, Sarah; Phillips, Anjanette
2018-04-01
To identify the optimal spatial and dimensional requirements of grab bars that support independent and assisted transfers by older adults and their care providers. Although research has demonstrated that toilet grab bars based on the Americans with Disabilities Act (ADA) Accessibility Standards do not meet the needs of older adults, the specific dimensional requirements for alternative configurations are unknown. A two-phased study with older adults and care providers in residential facilities was conducted to determine the optimal requirements for grab bars. Seniors and caregivers in skilled nursing facilities performed transfers using a mock-up toilet. In Phase 1, participants evaluated three grab bar configurations to identify optimal characteristics for safety, ease of use, comfort, and helpfulness. These characteristics were then validated using ability-matched samples in Phase 2. The optimal configuration derived in Phase 1 included fold-down grab bars on both sides of the toilet (14" from the centerline [CL] of the toilet, 32" above the floor, and extending a minimum of 6" in front of the toilet) with one side open and a sidewall 24" from the CL of the toilet on the other. Phase 2 feedback was significantly positive for independent and one-person transfers and somewhat lower, albeit still positive, for two-person transfers. The study provides substantial evidence that bilateral grab bars are significantly more effective than those that comply with current ADA Accessibility Standards. Findings provide specific spatial and dimensional attributes for grab bar configurations that would be most effective in senior facilities.
An engineering code to analyze hypersonic thermal management systems
NASA Technical Reports Server (NTRS)
Vangriethuysen, Valerie J.; Wallace, Clark E.
1993-01-01
Thermal loads on current and future aircraft are increasing and as a result are stressing the energy collection, control, and dissipation capabilities of current thermal management systems and technology. The thermal loads for hypersonic vehicles will be no exception. In fact, with their projected high heat loads and fluxes, hypersonic vehicles are a prime example of systems that will require thermal management systems (TMS) that have been optimized and integrated with the entire vehicle to the maximum extent possible during the initial design stages. This will not only be to meet operational requirements, but also to fulfill weight and performance constraints in order for the vehicle to take off and complete its mission successfully. To meet this challenge, the TMS can no longer be two or more entirely independent systems, nor can thermal management be an afterthought in the design process, as has typically been the approach in the past. Instead, a TMS that is integrated throughout the entire vehicle and subsequently optimized will be required. To accomplish this, a method that iteratively optimizes the TMS throughout the vehicle will not only be highly desirable, but advantageous in order to reduce the man-hours normally required to conduct the necessary tradeoff studies and comparisons. A thermal management engineering computer code that is under development and being managed at Wright Laboratory, Wright-Patterson AFB, is discussed. The primary goal of the code is to aid in the development of a hypersonic vehicle TMS that has been optimized and integrated on a total vehicle basis.
Modern control techniques in active flutter suppression using a control moment gyro
NASA Technical Reports Server (NTRS)
Buchek, P. M.
1974-01-01
The development of organized synthesis techniques using concepts of modern control theory was studied for the design of active flutter suppression systems for two- and three-dimensional lifting surfaces, utilizing a control moment gyro (CMG) to generate the required control torques. Incompressible flow theory is assumed, with the unsteady aerodynamic forces and moments for arbitrary airfoil motion obtained by using the convolution integral based on Wagner's indicial lift function. Linear optimal control theory is applied to find particular optimal sets of gain values which minimize a quadratic performance function. The closed-loop system's response to impulsive gust disturbances and the resulting control power requirements are investigated, and the system eigenvalues necessary to minimize the maximum value of control power are determined.
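The quadratic-performance-function machinery referenced here is standard LQR. The sketch below solves a continuous-time algebraic Riccati equation for an invented two-state, single-input system (not the paper's aeroelastic model) to obtain stabilizing optimal gains.

```python
# Generic continuous-time LQR sketch with illustrative dynamics.
import numpy as np
from scipy.linalg import solve_continuous_are

# x = [deflection, deflection rate]; single torque input (toy stand-in)
A = np.array([[0.0, 1.0],
              [4.0, 0.1]])        # unstable open loop, like flutter onset
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])          # penalize deflection more than rate
R = np.array([[0.5]])             # control-power penalty

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal gains, u = -K x
closed = A - B @ K
print(K, np.linalg.eigvals(closed))   # eigenvalues moved into the left half-plane
```

Raising R mimics the paper's concern with control power: heavier input penalties yield smaller gains at the cost of slower gust recovery.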
Optimal designs based on the maximum quasi-likelihood estimator
Shen, Gang; Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform as well as or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least squares estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to misspecification in the probability distribution of the responses.
Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method
NASA Astrophysics Data System (ADS)
Fitrianingsih, E.; Armellin, R.
2018-04-01
One of the objectives of mission design is selecting an optimum orbital transfer, which is often translated as a transfer that requires minimum propellant consumption. In order to assure that the selected trajectory meets the requirement, the optimality of the transfer should first be analyzed, either by directly calculating the ΔV of the candidate trajectories and selecting the one that gives a minimum value or by evaluating the trajectory according to certain criteria of optimality. The second method is performed by analyzing the profile of the modulus of the thrust direction vector, which is known as the primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify whether the result from the direct method is truly optimal or whether the ΔV can be reduced further by implementing a correction maneuver to the reference trajectory. In addition to its capability to evaluate the transfer optimality without the need to calculate the transfer ΔV, the primer vector also enables us to identify the time and position at which to apply a correction maneuver in order to optimize a non-optimum transfer. This paper presents the analytical approach to the fuel-optimal impulsive transfer using the primer vector method. The validity of the method is confirmed by comparing its results to those from the numerical method. The investigation of the optimality of direct transfers is used to give an example of the application of the method. The case under study is the prograde elliptic transfers from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.
Optimizing R with SparkR on a commodity cluster for biomedical research.
Sedlmayr, Martin; Würfl, Tobias; Maier, Christian; Häberle, Lothar; Fasching, Peter; Prokosch, Hans-Ulrich; Christoph, Jan
2016-12-01
Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymized data, which is often not possible for preliminary or occasional tasks. We explored the possibilities to build a private, flexible cluster for processing scripts in R based on commodity, non-dedicated hardware of our department. For this, a GWAS calculation in R on a single desktop computer, a Message Passing Interface (MPI) cluster, and a SparkR cluster were compared with regard to performance, scalability, quality, and simplicity. The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (2 weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and allows supporting big data analysis using R even on non-dedicated resources with minimal change to the original code. To unleash the full potential, additional efforts should be invested to customize and improve the algorithms, especially with regard to data distribution.
Improving the performance of surgery-based clinical pathways: a simulation-optimization approach.
Ozcan, Yasar A; Tànfani, Elena; Testi, Angela
2017-03-01
This paper aims to improve the performance of clinical processes using clinical pathways (CPs). The specific goal of this research is to develop a decision support tool, based on a simulation-optimization approach, which identifies the proper adjustment and alignment of resources to achieve better performance for both the patients and the health-care facility. When multiple perspectives are present in a decision problem, critical issues arise and often require the balancing of goals. In our approach, to meet patients' clinical needs in a timely manner and to avoid worsening of clinical conditions, we assess the appropriate level of resources. The simulation-optimization model seeks and evaluates alternative resource configurations aimed at balancing the two main objectives: meeting patient needs and optimal utilization of beds and operating rooms. Primary data were collected at a Department of Surgery of a public hospital located in Genoa, Italy. The simulation-optimization modelling approach in this study has been applied to evaluate the thyroid surgical treatment together with the other surgery-based CPs. The low rate of bed utilization and the long elective waiting lists of the specialty under study indicate that the wards were oversized while the operating room capacity was the bottleneck of the system. The model enables hospital managers to determine which objective has to be given priority, as well as the corresponding opportunity costs.
Optimization of Microelectronic Devices for Sensor Applications
NASA Technical Reports Server (NTRS)
Cwik, Tom; Klimeck, Gerhard
2000-01-01
The NASA/JPL goal to reduce payload in future space missions while increasing mission capability demands miniaturization of active and passive sensors, analytical instruments, and communication systems, among others. Currently, typical system requirements include the detection of particular spectral lines, associated data processing, and communication of the acquired data to other systems. Advances in lithography and deposition methods result in more advanced devices for space application, while the sub-micron resolution currently available opens a vast design space. Though an experimental exploration of this widening design space, searching for optimized performance by repeated fabrication efforts, is unfeasible, it does motivate the development of reliable software design tools. These tools necessitate models based on the fundamental physics and mathematics of the device to accurately model effects such as diffraction and scattering in opto-electronic devices, or bandstructure and scattering in heterostructure devices. The software tools must have convenient turn-around times and interfaces that allow effective usage. The first issue is addressed by the application of high-performance computers and the second by the development of graphical user interfaces driven by properly developed data structures. These tools can then be integrated into an optimization environment, and with the available memory capacity and computational speed of high-performance parallel platforms, simulation of optimized components can proceed. In this paper, specific applications of the electromagnetic modeling of infrared filtering, as well as heterostructure device design, will be presented using genetic algorithm global optimization methods.
Study of advanced atmospheric entry systems for Mars
NASA Technical Reports Server (NTRS)
1978-01-01
Entry system designs are described for various advanced Mars missions including sample return, hard lander, and Mars airplane. The Mars exploration systems for sample return and the hard lander require deceleration from direct-approach entry velocities of about 6 km/s to terminal velocities consistent with surface landing requirements. The Mars airplane entry system is decelerated from orbit at 4.6 km/s to deployment near the surface. Mass performance characteristics are estimated for the major elements of the required entry systems using Viking technology, or logical extensions of that technology, in order to provide a common basis of comparison for the three mission mode approaches. The entry systems, although not optimized, are based on Viking designs and reflect current hardware performance capability and realistic mass relationships.
Design and optimization of all-optical networks
NASA Astrophysics Data System (ADS)
Xiao, Gaoxi
1999-10-01
In this thesis, we present our research results on the design and optimization of all-optical networks. We divide our results into the following four parts: 1. In the first part, we consider broadcast-and-select networks. In our research, we propose an alternative and cheaper network configuration to hide the tuning time. In addition, we derive lower bounds on the optimal schedule lengths and prove that they are tighter than the best existing bounds. 2. In the second part, we consider all-optical wide area networks. We propose a set of algorithms for allocating a given number of WCs to the nodes. We adopt a simulation-based optimization approach, in which we collect utilization statistics of WCs from computer simulation and then perform optimization to allocate the WCs. Therefore, our algorithms are widely applicable and they are not restricted to any particular model and assumption. We have conducted extensive computer simulation on regular and irregular networks under both uniform and non-uniform traffic. We see that our method can get nearly the same performance as that of full wavelength conversion by using a much smaller number of WCs. Compared with the best existing method, the results show that our algorithms can significantly reduce (1) the overall blocking probability (i.e., better mean quality of service) and (2) the maximum of the blocking probabilities experienced at all the source nodes (i.e., better fairness). Equivalently, for a given performance requirement on blocking probability, our algorithms can significantly reduce the number of WCs required. 3. In the third part, we design and optimize the physical topology of all-optical wide area networks. We show that the design problem is NP-complete and we propose a heuristic algorithm called the two-stage cut saturation algorithm for this problem. Simulation results show that (1) the proposed algorithm can efficiently design networks with low cost and high utilization, and (2) if wavelength converters are available to support full wavelength conversion, the cost of the links can be significantly reduced. 4. In the fourth part, we consider all-optical wide area networks with multiple fibers per link. We design a node configuration for all-optical networks. We exploit the flexibility that, to establish a lightpath across a node, we can select any one of the available channels in the incoming link and any one of the available channels in the outgoing link. As a result, the proposed node configuration requires a small number of small optical switches while it can achieve nearly the same performance as the existing one. And there is no additional crosstalk other than the intrinsic crosstalk within each single-chip optical switch.
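The second part's simulation-based allocation of wavelength converters (WCs) reduces to a simple loop once the simulator is abstracted away. In the sketch below the blocking simulation is a stub with invented demand figures; the thesis drives the same loop with full wavelength-routed network simulations.

```python
# Schematic simulation-driven WC allocation (simulator stubbed for illustration).
import numpy as np

rng = np.random.default_rng(9)
N_NODES, BUDGET = 14, 20

def simulate_utilization(alloc):
    """Stub returning per-node converter-demand statistics. A real run would
    route random calls and count conversion attempts at each node."""
    demand = np.linspace(5.0, 1.0, N_NODES) + rng.normal(0, 0.1, N_NODES)
    return demand / (1.0 + alloc)        # more converters -> lower pressure

alloc = np.zeros(N_NODES, dtype=int)
for _ in range(BUDGET):
    util = simulate_utilization(alloc)
    alloc[util.argmax()] += 1            # next converter goes to the hottest node
print(alloc)
```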
Transonic airfoil design for helicopter rotor applications
NASA Technical Reports Server (NTRS)
Hassan, Ahmed A.; Jackson, B.
1989-01-01
Despite the fact that the flow over a rotor blade is strongly influenced by locally three-dimensional and unsteady effects, practical experience has consistently demonstrated that substantial improvements in aerodynamic performance can be gained by improving the steady two-dimensional characteristics of the airfoil(s) employed. The two phenomena known to have the greatest impact on overall rotor performance are: (1) retreating blade stall, with the associated large pressure drag, and (2) compressibility effects on the advancing blade, leading to shock formation with the associated wave drag and boundary-layer separation losses. It was concluded that: optimization routines are a powerful tool for finding solutions to multiple design point problems; the optimization process must be guided by the judicious choice of geometric and aerodynamic constraints; optimization routines should be appropriately coupled to viscous, not inviscid, transonic flow solvers; hybrid design procedures in conjunction with optimization routines represent the most efficient approach for rotor airfoil design; unsteady effects resulting in the delay of lift and moment stall should be modeled using simple empirical relations; and in-flight optimization of aerodynamic loads (e.g., use of variable rate blowing, flaps, etc.) can satisfy any number of requirements at design and off-design conditions.
Design optimization using adjoint of Long-time LES for the trailing edge of a transonic turbine vane
NASA Astrophysics Data System (ADS)
Talnikar, Chaitanya; Wang, Qiqi
2017-11-01
Adjoint-based design optimization methods have been applied to low-fidelity simulation methods such as Reynolds-Averaged Navier-Stokes (RANS) and are useful for designing fluid machinery components. To reliably capture the complex flow phenomena involved in turbomachinery, however, high-fidelity simulations such as large eddy simulation (LES) are required. Unfortunately, due to the chaotic dynamics of turbulence, the unsteady adjoint method for LES diverges and produces incorrect gradients. Using a viscosity-stabilized unsteady adjoint method developed for LES, the gradient can be obtained with reasonable accuracy. In this paper, design of the trailing edge of a gas turbine inlet guide vane is performed with the objective of reducing stagnation pressure loss and heat transfer over the surface of the vane. Slight changes in the shape of the trailing edge can significantly impact these quantities by altering the boundary-layer development process and separation points. The trailing edge is parameterized using a linear combination of 5 convex designs. Bayesian optimization is used as a global optimizer, with the objective function evaluated from the LES and gradients obtained using the viscosity adjoint method. Results from the optimization, performed on the supercomputer Mira, are presented.
Cell wall-bound silicon optimizes ammonium uptake and metabolism in rice cells.
Sheng, Huachun; Ma, Jie; Pu, Junbao; Wang, Lijun
2018-05-16
Turgor-driven plant cell growth depends on cell wall structure and mechanics. Strengthening of cell walls through an association and interaction with silicon (Si) could lead to improved nutrient uptake and optimized growth and metabolism in rice (Oryza sativa). However, the structural basis and physiological mechanisms of nutrient uptake and metabolism optimization under Si assistance remain obscure. Single-cell-level biophysical measurements, including in situ non-invasive micro-testing (NMT) of NH4+ ion fluxes, atomic force microscopy (AFM) of cell walls, and electrolyte leakage and membrane potential measurements, as well as whole-cell proteomics using isobaric tags for relative and absolute quantification (iTRAQ), were performed. The altered cell wall structure increases the uptake rate of the main nutrient NH4+ in Si-accumulating cells, whereas the rate is only half as high in Si-deprived counterparts. Rigid cell walls, enhanced by a wall-bound form of Si as the structural basis, stabilize cell membranes. This, in turn, optimizes nutrient uptake of cells in the same growth phase without any requirement for up-regulation of transmembrane ammonium transporters. Optimization of cellular nutrient acquisition strategies can substantially improve performance in terms of growth, metabolism and stress resistance.
NASA Astrophysics Data System (ADS)
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2012-09-01
The global rise in energy demand poses major obstacles for many energy organizations in providing an adequate energy supply. Hence, many techniques for generating cost-effective, reliable and environmentally friendly alternative energy are being explored. One such method is the integration of photovoltaic cells, wind turbine generators and fuel-based generators, together with storage batteries. Power systems of this sort are known as distributed generation (DG) power systems. However, the application of DG power systems raises certain issues such as cost effectiveness, environmental impact and reliability. The modelling and optimization of this DG power system were successfully performed in previous work using Particle Swarm Optimization (PSO). The central idea of that work was to minimize cost, minimize emissions and maximize reliability (a multi-objective (MO) setting) subject to the power balance and design requirements. In this work, we introduce a fuzzy model that takes into account the uncertain nature of certain variables in the DG system which depend on weather conditions (such as the insolation and wind speed profiles). The MO optimization in a fuzzy environment was performed by applying the Hopfield Recurrent Neural Network (HNN). Analysis of the optimized results was then carried out.
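One common way to pose such a fuzzy multi-objective problem is max-min aggregation: each objective receives a fuzzy membership between 0 (worst acceptable value) and 1 (best), and the optimizer maximizes the worst-satisfied membership. The sketch below shows only that generic formulation; the function names and linear memberships are assumptions for illustration, not the paper's HNN-based model.

```python
def membership(value, best, worst):
    """Linear fuzzy membership: 1 at the best value, 0 at the worst.
    Works whether the objective is minimized (best < worst) or
    maximized (best > worst)."""
    if best == worst:
        return 1.0
    return max(0.0, min(1.0, (worst - value) / (worst - best)))

def fuzzy_fitness(cost, emissions, reliability, bounds):
    # bounds[name] = (best, worst) per objective -- assumed, problem-specific data.
    mu = [
        membership(cost, *bounds["cost"]),               # minimized
        membership(emissions, *bounds["emissions"]),     # minimized
        membership(reliability, *bounds["reliability"]), # maximized (best is high)
    ]
    return min(mu)  # max-min aggregation: raise the worst-satisfied objective
```

Maximizing `fuzzy_fitness` pushes all three memberships up together, which is what makes the formulation natural for uncertain, weather-dependent inputs.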
Preliminary Work for Examining the Scalability of Reinforcement Learning
NASA Technical Reports Server (NTRS)
Clouse, Jeff
1998-01-01
Researchers began studying automated agents that learn to perform multiple-step tasks early in the history of artificial intelligence (Samuel, 1963; Samuel, 1967; Waterman, 1970; Fikes, Hart & Nilsson, 1972). Multiple-step tasks are tasks that can only be solved via a sequence of decisions, such as control problems, robotics problems, classic problem-solving, and game-playing. The objective of agents attempting to learn such tasks is to use the resources they have available to become more proficient at the tasks. In particular, each agent attempts to develop a good policy, a mapping from states to actions, that allows it to select actions that optimize a measure of its performance on the task; for example, reducing the number of steps necessary to complete the task successfully. Our study focuses on reinforcement learning, a set of learning techniques where the learner performs trial-and-error experiments in the task and adapts its policy based on the outcome of those experiments. Much of the work in reinforcement learning has focused on a particular, simple representation, where every problem state is represented explicitly in a table, and associated with each state are the actions that can be chosen in that state. A major advantage of this table-lookup representation is that one can prove that certain reinforcement learning techniques will develop an optimal policy for the current task. The drawback is that the representation limits the application of reinforcement learning to multiple-step tasks with relatively small state spaces. There has been a little theoretical work proving that convergence to optimal solutions can be obtained when using generalization structures, but the structures considered are quite simple. The theory says little about complex structures, such as multi-layer, feedforward artificial neural networks (Rumelhart & McClelland, 1986), but empirical results indicate that the use of reinforcement learning with such structures is promising. These empirical results make no theoretical claims, nor compare the policies produced to optimal policies. A goal of our work is to be able to make the comparison between an optimal policy and one stored in an artificial neural network. A difficulty of performing such a study is finding a multiple-step task that is small enough that one can find an optimal policy using table lookup, yet large enough that, for practical purposes, an artificial neural network is really required. We have identified a limited form of the game OTHELLO as satisfying these requirements. The work we report here is in the very preliminary stages of research, but this paper provides background for the problem being studied and a description of our initial approach to examining the problem. In the remainder of this paper, we first describe reinforcement learning in more detail. Next, we present the game OTHELLO. Finally, we argue that a restricted form of the game meets the requirements of our study, and describe our preliminary approach to finding an optimal solution to the problem.
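The table-lookup representation described above is easiest to see in code. Below is a minimal tabular Q-learning sketch; the `env` interface (`reset`, `step`, `actions`) is a hypothetical stand-in rather than anything from the paper. With a small enough state space, such as the restricted OTHELLO proposed here, the table converges toward an optimal policy under standard conditions.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=10000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: one table entry per (state, action) pair."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy: trial-and-error exploration of the task.
            if random.random() < epsilon:
                action = random.choice(env.actions(state))
            else:
                action = max(env.actions(state), key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max((Q[(next_state, a)] for a in env.actions(next_state)),
                            default=0.0)
            # One-step temporal-difference update toward reward + discounted value.
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q
```

Replacing the table `Q` with a neural-network approximator is exactly the step that loses the convergence guarantee, which is the gap this study sets out to measure.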
Computational complexities and storage requirements of some Riccati equation solvers
NASA Technical Reports Server (NTRS)
Utku, Senol; Garba, John A.; Ramesh, A. V.
1989-01-01
The linear optimal control problem of an nth-order time-invariant dynamic system with a quadratic performance functional is usually solved by the Hamilton-Jacobi approach. This leads to the solution of the differential matrix Riccati equation with a terminal condition. The bulk of the computation for the optimal control problem is related to the solution of this equation. There are various algorithms in the literature for solving the matrix Riccati equation. However, computational complexities and storage requirements as a function of numbers of state variables, control variables, and sensors are not available for all these algorithms. In this work, the computational complexities and storage requirements for some of these algorithms are given. These expressions show the immensity of the computational requirements of the algorithms in solving the Riccati equation for large-order systems such as the control of highly flexible space structures. The expressions are also needed to compute the speedup and efficiency of any implementation of these algorithms on concurrent machines.
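For reference, a representative form of the equation in question (written here in conventional LQR notation rather than the paper's) is

\[
-\dot{P}(t) = A^{\mathsf{T}} P(t) + P(t) A - P(t) B R^{-1} B^{\mathsf{T}} P(t) + Q, \qquad P(t_f) = S,
\]

with the optimal feedback \( u(t) = -R^{-1} B^{\mathsf{T}} P(t)\, x(t) \). Since \(P\) is an \(n \times n\) symmetric matrix, each integration step costs \(O(n^3)\) arithmetic with \(O(n^2)\) storage, which is why the computational requirements grow so quickly for large-order systems such as highly flexible space structures.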
Kinematics and constraints associated with swashplate blade pitch control
NASA Technical Reports Server (NTRS)
Leyland, Jane A.
1993-01-01
An important class of techniques to reduce helicopter vibration is based on using a Higher Harmonic controller to optimally define the Higher Harmonic blade pitch. These techniques typically require solution of a general optimization problem requiring the determination of a control vector which minimizes a performance index where functions of the control vector are subject to inequality constraints. Six possible constraint functions associated with swashplate blade pitch control were identified and defined. These functions constrain: (1) blade pitch Fourier Coefficients expressed in the Rotating System, (2) blade pitch Fourier Coefficients expressed in the Nonrotating System, (3) stroke of the individual actuators expressed in the Nonrotating System, (4) blade pitch expressed as a function of blade azimuth and actuator stroke, (5) time rate-of-change of the aforementioned parameters, and (6) required actuator power. The aforementioned constraints and the associated kinematics of swashplate blade pitch control by means of the strokes of the individual actuators are documented.
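Stated abstractly, the optimization problem underlying all six constraint classes has the standard nonlinear-programming form (the notation here is chosen for illustration):

\[
\min_{\boldsymbol{\theta}} \; J(\boldsymbol{\theta}) \quad \text{subject to} \quad g_i(\boldsymbol{\theta}) \le 0, \quad i = 1, \ldots, m,
\]

where \(\boldsymbol{\theta}\) is the vector of higher harmonic control inputs, \(J\) the vibration performance index, and each \(g_i\) one of the swashplate-related constraint functions enumerated above (blade pitch Fourier coefficients in either system, actuator stroke, blade pitch versus azimuth, rates of change, and actuator power).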
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The point estimate method is used by NASA for qualifying special NDE procedures and uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability of detection at 95% confidence, denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaw sizes and α90. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing the flaw sizes in the point estimate demonstration flaw set.
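The 29-flaw convention follows directly from the binomial model: if the true POD at the demonstrated size were only 0.90, the chance of detecting all n flaws is 0.9^n, and n = 29 is the smallest n for which this falls below 0.05. A perfect 29-of-29 score therefore demonstrates 90% POD at 95% confidence. A few lines of Python confirm the arithmetic:

```python
# Find the smallest n such that a perfect n-of-n detection score rules out
# POD < 0.90 at 95% confidence, i.e. 0.9**n < 1 - 0.95.
p, conf = 0.90, 0.95
n = 1
while p**n > 1 - conf:
    n += 1
print(n, p**n)  # -> 29 0.0471... (0.9**28 is 0.0523, still above 0.05)
```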
NASA Technical Reports Server (NTRS)
Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana
2016-01-01
In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large, tightly coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
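To make the structure of a multicolor point-implicit sweep concrete, here is a serial Python/NumPy sketch; the data layout and names are illustrative, not the paper's implementation. Rows are grouped into colors so that rows within a color share no off-diagonal coupling, which is what lets a GPU update an entire color group in one batched kernel.

```python
import numpy as np

def multicolor_point_implicit_sweep(D_inv, off_blocks, colors, x, b):
    """One relaxation sweep of a multicolor point-implicit solver.

    D_inv[i]      : pre-factored inverse of row i's dense diagonal block
    off_blocks[i] : list of (j, B_ij) off-diagonal blocks coupling row i to row j
    colors        : list of row groups; rows in a group are mutually independent
    x, b          : lists of per-row solution and right-hand-side vectors
    """
    for group in colors:
        for i in group:  # on a GPU, all rows in a group update concurrently
            r = b[i].copy()
            for j, B in off_blocks[i]:
                r -= B @ x[j]      # gather neighbor contributions (sparse part)
            x[i] = D_inv[i] @ r    # small dense solve (batched GEMV on the GPU)
    return x
```

The inner gather is the indirect, low-arithmetic-intensity step, and the per-row dense solve size varies with the block size, which together account for the challenges the abstract lists.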
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.
Diffusion Monte Carlo is the most accurate widely used quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many nearly identical computational tasks that requires load balancing.
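The name comes from Walker's classic alias method, in which each under-full cell is paired with exactly one over-full donor; the analogous pairing of under- and over-loaded processes is what bounds each process to at most one received message. A sketch of the classic alias-table construction follows, as an illustration of the pairing idea rather than the paper's MPI implementation:

```python
import random

def build_alias_table(weights):
    """Walker's alias method: O(n) construction, O(1) sampling."""
    n = len(weights)
    total = float(sum(weights))
    prob = [w * n / total for w in weights]   # scale so the average cell is 1
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    alias = [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                 # over-full cell l tops up under-full cell s
        prob[l] -= 1.0 - prob[s]     # l donated exactly (1 - prob[s]) of its mass
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def sample(prob, alias):
    """Draw an index with probability proportional to the original weights."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```

Each under-full cell is filled by exactly one donor, mirroring the one-message-per-process guarantee claimed for the load balancer.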
Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.
2017-01-01
The Grid scheduler schedules user jobs on the best available resource, in terms of resource characteristics, by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, in order to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper uses the resource failure rate together with a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by periodically saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated into it. The performance of the two algorithms was evaluated in terms of three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075
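As an illustration of how a failure rate can be folded into ACO's resource selection, the probabilistic selection rule might weight each resource as below. This is a generic sketch under assumed names and exponents, not the paper's exact formulation:

```python
import random

def selection_weight(pheromone, exec_time, failure_rate, alpha=1.0, beta=2.0):
    """Weight of a Grid resource in ACO's probabilistic selection rule.

    Assumed form (illustrative): the usual pheromone**alpha * heuristic**beta
    product, with the heuristic biased toward fast and historically
    reliable resources.
    """
    heuristic = (1.0 / exec_time) * (1.0 - failure_rate)
    return (pheromone ** alpha) * (heuristic ** beta)

def choose_resource(resources, weights):
    """Roulette-wheel selection proportional to the weights."""
    total = sum(weights)
    r, acc = random.uniform(0.0, total), 0.0
    for res, w in zip(resources, weights):
        acc += w
        if r <= acc:
            return res
    return resources[-1]
```

Checkpoint-based rollback then bounds the work lost when a selected resource fails anyway: on failure the job restarts from its last saved state rather than from scratch.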
NASA Astrophysics Data System (ADS)
Kalatzis, Fanis G.; Papageorgiou, Dimitrios G.; Demetropoulos, Ioannis N.
2006-09-01
The Merlin/MCL optimization environment and the GAMESS-US package were combined to offer an extended and efficient quantum chemistry optimization system, capable of implementing complex optimization strategies for generic molecular modeling problems. A communication and data exchange interface was established between the two packages, exploiting all Merlin features such as multiple optimizers, box constraints, user extensions and a high-level programming language. An important feature of the interface is its ability to perform dimer computations that eliminate the basis set superposition error using the counterpoise (CP) method of Boys and Bernardi. Furthermore, it offers CP-corrected geometry optimizations using analytic derivatives. The unified optimization environment was applied to construct portions of the intermolecular potential energy surface of the weakly bound H-bonded complex C6H6-H2O by utilizing the high-level Merlin Control Language. The H-bonded dimer HF-H2O was also studied by CP-corrected geometry optimization. The ab initio electronic structure energies were calculated using the 6-31G** basis set at the Restricted Hartree-Fock and second-order Møller-Plesset levels, while all geometry optimizations were carried out using a quasi-Newton algorithm provided by Merlin. Program summary: Title of program: MERGAM. Catalogue identifier: ADYB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYB_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer for which the program is designed and others on which it has been tested: designed for machines running the UNIX operating system; tested on IA32 (Linux with gcc/g77 v.3.2.3), AMD64 (Linux with the Portland Group compilers v.6.0), SUN64 (SunOS 5.8 with the Sun Workshop compilers v.5.2) and SGI64 (IRIX 6.5 with the MIPSpro compilers v.7.4). Installations: University of Ioannina, Greece. Operating systems under which the program has been tested: UNIX. Programming languages used: ANSI C, ANSI Fortran-77. No. of lines in distributed program, including test data, etc.: 11 282. No. of bytes in distributed program, including test data, etc.: 49 458. Distribution format: tar.gz. Memory required to execute with typical data: depends mainly on the selected GAMESS-US basis set and the number of atoms. No. of bits in a word: 32. No. of processors used: 1. Has the code been vectorized or parallelized?: no. Nature of physical problem: Multidimensional geometry optimization is of great importance in any ab initio calculation since it is usually one of the most CPU-intensive tasks, especially for large molecular systems. For example, the geometric and energetic description of van der Waals and weakly bound H-bonded complexes requires the construction of the relevant portions of the multidimensional intermolecular potential energy surface (IPES), so that the various views held about the nature of these bonds can be quantitatively tested. Method of solution: The Merlin/MCL optimization environment was interconnected with the GAMESS-US package to facilitate geometry optimization in quantum chemistry problems. Mapping the important portions of the IPES requires the capability to program optimization strategies, and the Merlin/MCL environment was used to implement such strategies.
In this work, a CP-corrected geometry optimization was performed on the HF-H2O complex, and an MCL program was developed to study portions of the potential energy surface of the C6H6-H2O complex. Restrictions on the complexity of the problem: The Merlin optimization environment and the GAMESS-US package must be installed. The MERGAM interface requires GAMESS-US input files constructed in Cartesian coordinates. This restriction stems from a design-time requirement not to allow reorientation of atomic coordinates; this rule always holds when the COORD = UNIQUE keyword is applied in a GAMESS-US input file. Typical running time: depends on the size of the molecular system, the size of the basis set and the method of electron correlation. Execution of the test run took approximately 5 min on a 2.8 GHz Intel Pentium CPU.
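For context, the Boys-Bernardi counterpoise correction referenced above computes the interaction energy with every fragment treated in the full dimer basis (ghost functions placed on the absent partner). In standard notation (not taken from the paper):

\[
\Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB},
\]

where subscripts denote the system evaluated and superscripts the basis used, so \(E_{A}^{AB}\) is the energy of monomer A computed in the complete dimer basis. Because the monomers are given the same extra basis functions they would borrow in the dimer, the spurious basis set superposition error cancels out of the difference.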
Prospective regularization design in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2015-12-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
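A representative PIBR objective (a generic form for illustration; the paper's exact estimator may differ) makes the role of the "prior image strength" explicit:

\[
\hat{\boldsymbol{\mu}} = \operatorname*{arg\,min}_{\boldsymbol{\mu}} \; \left\| \mathbf{y} - \mathbf{A}\boldsymbol{\mu} \right\|_{\mathbf{W}}^{2} + \beta_{R}\,\Psi_{R}(\boldsymbol{\mu}) + \beta_{P}\,\Psi_{P}(\boldsymbol{\mu} - \boldsymbol{\mu}_{P}),
\]

where \(\boldsymbol{\mu}_P\) is the prior image, \(\Psi_R\) an ordinary roughness penalty, \(\Psi_P\) a penalty pulling the reconstruction toward the prior, and \(\beta_P\) the prior image strength. The prospective method described above predicts a good \(\beta_P\) (in general a spatially varying map) for a presumed anatomical change without sweeping the parameter over many full reconstructions.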
NASA Astrophysics Data System (ADS)
Ivanyukhin, A. V.; Petukhov, V. G.
2016-12-01
The problem of optimizing the interplanetary trajectories of a spacecraft (SC) with a solar electric propulsion system (SEPS) is examined, in particular the minimum permissible power of the solar electric propulsion plant required for a successful flight. Permissible ranges of thrust and exhaust velocity are analyzed for the given range of flight time and final mass of the spacecraft. The optimization is performed according to Pontryagin's maximum principle, and the continuation method is used to reduce the boundary value problem of the maximum principle to a Cauchy problem and to study the dependence of the solution on the parameters. This combination results in a robust algorithm that reduces the problem of trajectory optimization to the numerical integration of differential equations by the continuation method.
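The continuation step admits a compact statement. Writing the boundary value problem's residual as \(\mathbf{f}(\mathbf{z}, \tau) = 0\), where \(\mathbf{z}\) collects the unknown initial costates and \(\tau \in [0, 1]\) deforms an easily solved problem (\(\tau = 0\)) into the target one (\(\tau = 1\)), differentiation with respect to \(\tau\) yields the Cauchy problem (a generic homotopy sketch, not the authors' exact embedding):

\[
\frac{d\mathbf{z}}{d\tau} = -\left( \frac{\partial \mathbf{f}}{\partial \mathbf{z}} \right)^{-1} \frac{\partial \mathbf{f}}{\partial \tau}, \qquad \mathbf{z}(0) = \mathbf{z}_0,
\]

which is integrated numerically from the known solution \(\mathbf{z}_0\) at \(\tau = 0\); the solution of the original trajectory optimization problem is recovered at \(\tau = 1\).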