Science.gov

Sample records for adaptive performance optimization

  1. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural network based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm containing a model of the aircraft's propulsion system that is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle) to improve aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural network based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line, and it is expected to provide additional benefits beyond those of PSC. The PSC algorithm is computationally intensive, is valid only at near-steady-state flight conditions, and has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller: specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at both steady-state and transient conditions, and it will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware are described and preliminary neural network training results are presented.

  2. Optimizing aircraft performance with adaptive, integrated flight/propulsion control

    NASA Technical Reports Server (NTRS)

    Smith, R. H.; Chisholm, J. D.; Stewart, J. F.

    1991-01-01

    The Performance-Seeking Control (PSC) integrated flight/propulsion adaptive control algorithm presented was developed in order to optimize total aircraft performance during steady-state engine operation. The PSC multimode algorithm minimizes fuel consumption at cruise conditions, while maximizing excess thrust during aircraft accelerations, climbs, and dashes, and simultaneously extending engine service life through reduction of fan-driving turbine inlet temperature upon engagement of the extended-life mode. The engine models incorporated by the PSC are continually upgraded, using a Kalman filter to detect anomalous operations. The PSC algorithm will be flight-demonstrated by an F-15 at NASA-Dryden.

  3. A concept for adaptive performance optimization on commercial transport aircraft

    NASA Technical Reports Server (NTRS)

    Jackson, Michael R.; Enns, Dale F.

    1995-01-01

    An adaptive control method is presented for the minimization of drag during flight for transport aircraft. The minimization of drag is achieved by taking advantage of the redundant control capability available in the pitch axis, with the horizontal tail used as the primary surface and symmetric deflection of the ailerons and cruise flaps used as additional controls. The additional control surfaces are excited with sinusoidal signals, while the altitude and velocity loops are closed with guidance and control laws. A model of the throttle response as a function of the additional control surfaces is formulated and the parameters in the model are estimated from the sensor measurements using a least squares estimation method. The estimated model is used to determine the minimum drag positions of the control surfaces. The method is presented for the optimization of one and two additional control surfaces. The adaptive control method is extended to optimize rate of climb with the throttle fixed. Simulations that include realistic disturbances are presented, as well as the results of a Monte Carlo simulation analysis that shows the effects of changing the disturbance environment and the excitation signal parameters.
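
    The least-squares step described above can be sketched in a few lines. The snippet below excites a single hypothetical redundant surface sinusoidally, fits a quadratic throttle-versus-deflection model to noisy measurements, and takes the vertex of the fit as the estimated minimum-drag setting; every coefficient and signal parameter is an illustrative assumption, not a value from the paper.

```python
# Hedged sketch: sinusoidal excitation of one redundant surface, quadratic
# least-squares fit of throttle vs. deflection, and the fitted minimum.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trim relationship: required throttle vs. flap deflection (deg).
def throttle_required(delta_flap):
    return 0.62 + 0.004 * (delta_flap - 1.5) ** 2   # minimum near 1.5 deg

t = np.linspace(0.0, 300.0, 3001)                   # 300 s of excitation
delta = 2.0 * np.sin(2.0 * np.pi * t / 60.0)        # sinusoidal excitation, +/-2 deg
meas = throttle_required(delta) + 0.002 * rng.standard_normal(t.size)  # noisy sensor

# Least-squares fit of throttle = c0 + c1*delta + c2*delta^2.
A = np.column_stack([np.ones_like(delta), delta, delta ** 2])
c0, c1, c2 = np.linalg.lstsq(A, meas, rcond=None)[0]

delta_opt = -c1 / (2.0 * c2)                        # vertex of the fitted quadratic
print(f"estimated minimum-drag deflection: {delta_opt:.2f} deg")
```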

  4. Direct adaptive performance optimization of subsonic transports: A periodic perturbation technique

    NASA Technical Reports Server (NTRS)

    Espana, Martin D.; Gilyard, Glenn

    1995-01-01

    Aircraft performance can be optimized at the flight condition by using available redundancy among actuators. Effective use of this potential allows improved performance beyond the limits imposed by design compromises. Optimization based on nominal models does not result in the best performance of the actual aircraft at the actual flight condition. An adaptive algorithm is proposed for optimizing performance parameters, such as speed or fuel flow, in flight, based exclusively on flight data. The algorithm is inherently insensitive to model inaccuracies and to measurement noise and biases, and it can optimize several decision variables at the same time. An adaptive constraint controller integrated into the algorithm regulates the optimization constraints, such as altitude or speed, without requiring any prior knowledge of the autopilot design. The algorithm has a modular structure which allows easy incorporation (or removal) of optimization constraints or decision variables into the optimization problem. An important part of the contribution is the development of analytical tools enabling convergence analysis of the algorithm and the establishment of simple design rules. The fuel-flow minimization and velocity maximization modes of the algorithm are demonstrated on the NASA Dryden B-720 nonlinear flight simulator for the single- and multi-effector optimization cases.
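
    For readers unfamiliar with periodic-perturbation optimization, the following is a textbook-style extremum-seeking sketch in the same spirit: a sinusoidal dither is added to the decision variable, the measured performance index is demodulated against the dither to estimate the local gradient, and that estimate drives the decision variable toward the optimum. The fuel-flow map and gains are invented, and this is not the authors' exact algorithm (which also handles constraints and multiple effectors).

```python
# Generic single-parameter extremum-seeking loop (periodic perturbation flavor).
import numpy as np

def fuel_flow(x):                       # hypothetical performance index, minimum at x = 3.0
    return 1.0 + 0.05 * (x - 3.0) ** 2

dt, a, omega, k = 0.01, 0.2, 2.0 * np.pi * 0.5, 0.8
x_hat, lp = 0.0, 0.0                    # parameter estimate and low-pass filter state

for i in range(60000):
    t = i * dt
    dither = a * np.sin(omega * t)
    y = fuel_flow(x_hat + dither)
    lp += dt * 0.5 * (y - lp)           # low-pass tracks the slow part of y
    grad = (y - lp) * np.sin(omega * t) # demodulating the oscillatory part estimates the gradient
    x_hat -= dt * k * grad              # descend the estimated gradient

print(f"converged decision variable: {x_hat:.2f} (true optimum 3.0)")
```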

  5. Design and Performance Optimization of GeoFEST for Adaptive Geophysical Modeling on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Norton, C. D.; Parker, J. W.; Lyzenga, G. A.; Glasscoe, M. T.; Donnellan, A.

    2006-12-01

    The Geophysical Finite Element Simulation Tool (GeoFEST) and the PYRAMID parallel adaptive mesh refinement library have been integrated to provide high performance and high resolution modeling of 3D Earth crustal deformation under tectonic loading associated with the earthquake cycle. This includes co-seismic and post-seismic modeling capabilities as well as other problems of geophysical interest. The use of the PYRAMID AMR library has allowed simulations of tens of millions of elements on various parallel computers, where strain energy is applied as the error estimation criterion. This has allowed for improved generation of time-dependent simulations where the computational effort can be localized to the geophysical regions of greatest activity. This talk will address techniques including conversion of the sequential GeoFEST software to a parallel version using PYRAMID, performance optimization, and various lessons learned as part of porting such software to various parallel systems including Linux clusters, SGI Altix systems, and Apple G5 XServe systems. We will also describe how the software has been applied in modeling of post-seismic deformation studies of the Landers and Northridge earthquake events.

  6. Lockheed L-1011 Test Station installation in support of the Adaptive Performance Optimization flight

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Technicians John Huffman, Phil Gonia and Mike Kerner of NASA's Dryden Flight Research Center, Edwards, California, carefully insert a monitor into the Research Engineering Test Station during installation of equipment for the Adaptive Performance Optimization experiment aboard Orbital Sciences Corporation's Lockheed L-1011 in Bakersfield, California, May 6, 1997. The Adaptive Performance Optimization project is designed to reduce the aerodynamic drag of large subsonic transport aircraft by varying the camber of the wing through real-time adjustment of flaps or ailerons in response to changing flight conditions. Reducing the drag will improve aircraft efficiency and performance, resulting in significant fuel savings for the nation's airlines worth hundreds of millions of dollars annually. Flights for the NASA experiment will occur periodically over the next couple of years on the modified wide-bodied jetliner, with all flights flown out of Bakersfield's Meadows Field. The experiment is part of Dryden's Advanced Subsonic Transport Aircraft Research program.

  7. Lockheed L-1011 TriStar first flight to support Adaptive Performance Optimization study

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Bearing the logos of the National Aeronautics and Space Administration and Orbital Sciences Corporation, Orbital's L-1011 Tristar lifts off the Meadows Field runway at Bakersfield, California, on its first flight May 21, 1997, in NASA's Adaptive Performance Optimization project. Developed by engineers at NASA's Dryden Flight Research Center, Edwards, California, the experiment seeks to reduce fuel consumption of large jetliners by improving the aerodynamic efficiency of their wings at cruise conditions. A research computer employing a sophisticated software program adapts to changing flight conditions by commanding small movements of the L-1011's outboard ailerons to give the wings the most efficient - or optimal - airfoil. Up to a dozen research flights will be flown in the current and follow-on phases of the project over the next couple of years.

  8. In-flight adaptive performance optimization (APO) control using redundant control effectors of an aircraft

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B. (Inventor)

    1999-01-01

    Practical application of real-time (or near real-time) Adaptive Performance Optimization (APO) is provided for a transport aircraft in steady climb, cruise, turn, descent, or other flight conditions based on measurements and calculations of incremental drag from a forced response maneuver of one or more redundant control effectors, defined as those in excess of the minimum set of control effectors required to maintain the steady flight condition in progress. The method comprises the steps of applying excitation in a raised-cosine form over an interval of from 100 to 500 sec at the rate of 1 to 10 sets/sec of excitation, and data for analysis is gathered in sets of measurements made during the excitation to calculate lift and drag coefficients C_L and C_D from two equations, one for each coefficient. A third equation is an expansion of C_D as a function of parasitic drag, induced drag, Mach and altitude drag effects, and control effector drag, and assumes a quadratic variation of drag with positions δ_i of redundant control effectors i=1 to n. The third equation is then solved for δ_i,opt, the optimal position of redundant control effector i, which is then used to set the control effector i for optimum performance during the remainder of said steady flight, or until monitored flight conditions change by some predetermined amount as determined automatically, or a predetermined minimum flight time has elapsed.
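
    Two pieces of the claimed method lend themselves to a short sketch: the raised-cosine excitation profile and the minimization of an assumed quadratic drag model in the redundant-effector positions. The coefficients below are invented placeholders; the patent's actual equations for C_L, C_D, and the drag expansion are not reproduced here.

```python
# Raised-cosine excitation plus the minimizer of an assumed quadratic drag model.
import numpy as np

def raised_cosine(t, t0, duration, amplitude):
    """Smooth 0 -> amplitude -> 0 excitation over [t0, t0 + duration]."""
    s = np.clip((t - t0) / duration, 0.0, 1.0)
    return amplitude * 0.5 * (1.0 - np.cos(2.0 * np.pi * s))

t = np.linspace(0.0, 300.0, 3001)                 # 300 s maneuver, within the 100-500 s range
excitation = raised_cosine(t, 50.0, 200.0, 2.0)   # 2 deg peak effector excitation

# Assumed fitted quadratic drag increment: dCD = g . delta + 0.5 * delta^T H delta
g = np.array([-0.0010, -0.0006])                  # linear terms for two effectors (illustrative)
H = np.array([[0.0040, 0.0005],
              [0.0005, 0.0030]])                  # positive-definite quadratic terms (illustrative)
delta_opt = np.linalg.solve(H, -g)                # stationary point = minimum for H > 0
print("optimal effector positions (deg):", np.round(delta_opt, 2))
```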

  9. High-performance brain-machine interface enabled by an adaptive optimal feedback-controlled point process decoder.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy; Moorman, Helene; Gowda, Suraj; Carmena, Jose M

    2014-01-01

    Brain-machine interface (BMI) performance has been improved using Kalman filters (KF) combined with closed-loop decoder adaptation (CLDA). CLDA fits the decoder parameters during closed-loop BMI operation based on the neural activity and inferred user velocity intention. These advances have resulted in the recent ReFIT-KF and SmoothBatch-KF decoders. Here we demonstrate high-performance and robust BMI control using a novel closed-loop BMI architecture termed adaptive optimal feedback-controlled (OFC) point process filter (PPF). Adaptive OFC-PPF allows subjects to issue neural commands and receive feedback with every spike event and hence at a faster rate than the KF. Moreover, it adapts the decoder parameters with every spike event in contrast to current CLDA techniques that do so on the time-scale of minutes. Finally, unlike current methods that rotate the decoded velocity vector, adaptive OFC-PPF constructs an infinite-horizon OFC model of the brain to infer velocity intention during adaptation. Preliminary data collected in a monkey suggests that adaptive OFC-PPF improves BMI control. OFC-PPF outperformed SmoothBatch-KF in a self-paced center-out movement task with 8 targets. This improvement was due to both the PPF's increased rate of control and feedback compared with the KF, and to the OFC model suggesting that the OFC better approximates the user's strategy. Also, the spike-by-spike adaptation resulted in faster performance convergence compared to current techniques. Thus adaptive OFC-PPF enabled proficient BMI control in this monkey. PMID:25571483

  10. Lockheed L-1011 Test Station on-board in support of the Adaptive Performance Optimization flight res

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This console and its complement of computers, monitors and communications equipment make up the Research Engineering Test Station, the nerve center for a new aerodynamics experiment being conducted by NASA's Dryden Flight Research Center, Edwards, California. The equipment is installed on a modified Lockheed L-1011 Tristar jetliner operated by Orbital Sciences Corp., of Dulles, Va., for Dryden's Adaptive Performance Optimization project. The experiment seeks to improve the efficiency of long-range jetliners by using small movements of the ailerons to improve the aerodynamics of the wing at cruise conditions. About a dozen research flights in the Adaptive Performance Optimization project are planned over the next two to three years. Improving the aerodynamic efficiency should result in equivalent reductions in fuel usage and costs for airlines operating large, wide-bodied jetliners.

  11. Lockheed L-1011 TriStar to support Adaptive Performance Optimization study with NASA F-18 chase plan

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This Lockheed L-1011 Tristar, seen here in June 1995, is currently the subject of a new flight research experiment developed by NASA's Dryden Flight Research Center, Edwards, California, to improve the efficiency of large transport aircraft. Shown with a NASA F-18 chase plane over California's Sierra Nevada mountains during an earlier baseline flight, the jetliner, operated by Orbital Sciences Corp., recently flew its first data-gathering mission in the Adaptive Performance Optimization project. The experiment seeks to reduce fuel consumption of large jetliners by improving the aerodynamic efficiency of their wings at cruise conditions. A research computer employing a sophisticated software program adapts to changing flight conditions by commanding small movements of the L-1011's outboard ailerons to give its wings the most efficient - or optimal - airfoil. Up to a dozen research flights will be flown in the current and follow-on phases of the project over the next couple of years.

  12. On the estimation algorithm used in adaptive performance optimization of turbofan engines

    NASA Technical Reports Server (NTRS)

    Espana, Martin D.; Gilyard, Glenn B.

    1993-01-01

    The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The performance seeking control algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This factor has a direct impact on the overall performance seeking control scheme, exacerbated by the open-loop character of the algorithm. The effects produced by unknown measurement biases on the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An equivalence relation between the biases and engine deviation parameters stems from an observability study; therefore, it cannot be decided whether the estimated engine deviation parameters represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes, based on local engine dynamic models, the new algorithm is easier to design and tune and it reduces the computational burden of the onboard computer.

  13. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.

  14. On the estimation algorithm for adaptive performance optimization of turbofan engines

    NASA Technical Reports Server (NTRS)

    Espana, Martin D.

    1993-01-01

    The performance seeking control (PSC) algorithm is designed to continuously optimize the performance of propulsion systems. The PSC algorithm uses a nominal propulsion system model and estimates, in flight, the engine deviation parameters (EDPs) characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated EDPs may not reflect the engine's actual off-nominal condition. This factor has a direct impact on the PSC scheme exacerbated by the open-loop character of the algorithm. In this paper, the effects produced by unknown measurement biases over the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the PSC algorithm to an F100 engine. An equivalence relation between the biases and EDPs stems from the analysis; therefore, it is undecided whether the estimated EDPs represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes, based on local engine dynamic models, the new algorithm is easier to design and tune and it reduces the computational burden of the onboard computer.

  15. Adaptive control schemes for improving dynamic performance of efficiency-optimized induction motor drives.

    PubMed

    Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P

    2015-07-01

    Model Based Control (MBC) is one of the energy optimal controllers used in vector-controlled Induction Motor (IM) drives for controlling the excitation of the motor in accordance with torque and speed. MBC offers energy conservation especially at part-load operation, but it creates ripples in torque and speed during load transition, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low pass filter, (ii) torque-producing current (iqs) injection in the output of the speed controller, and (iii) a Variable Structure Speed Controller (VSSC). The operation of MBC before and after load transitions is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant flux) control and MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes in terms of overshoot/undershoot peak amplitude of torque and DC link power, in addition to energy saving during load transitions. PMID:25820090

  16. Adaptive critics for dynamic optimization.

    PubMed

    Kulkarni, Raghavendra V; Venayagamoorthy, Ganesh Kumar

    2010-06-01

    A novel action-dependent adaptive critic design (ACD) is developed for dynamic optimization. The proposed combination of a particle swarm optimization-based actor and a neural network critic is demonstrated through dynamic sleep scheduling of wireless sensor motes for wildlife monitoring. The objective of the sleep scheduler is to dynamically adapt the sleep duration to the node's battery capacity and to the movement pattern of animals in its environment, in order to obtain uniformly spaced snapshots of the animal along its trajectory. Simulation results show that the sleep time of the node determined by the actor critic yields superior quality of sensory data acquisition and enhanced node longevity. PMID:20223635

  17. Adaptive Cuckoo Search Algorithm for Unconstrained Optimization

    PubMed Central

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971

  18. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971
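
    A minimal sketch of the adaptive step-size idea in records 17 and 18 is given below: a simplified cuckoo search on the sphere function in which the Lévy-flight step size shrinks over iterations. The specific adaptation rule and parameter values are assumptions for illustration and may differ from the published algorithm.

```python
# Simplified cuckoo search with an iteration-shrinking (adaptive) step size.
from math import gamma, pi, sin
import numpy as np

rng = np.random.default_rng(1)
dim, n_nests, n_iter, pa = 5, 15, 500, 0.25

def sphere(x):
    return float(np.sum(x ** 2))

def levy(size, beta=1.5):
    """Mantegna's algorithm for Levy-stable step directions."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

nests = rng.uniform(-5.0, 5.0, (n_nests, dim))
fitness = np.array([sphere(x) for x in nests])

for it in range(n_iter):
    alpha = 0.5 * (1.0 - it / n_iter)             # adaptive step size: large early, small late
    best = nests[np.argmin(fitness)]
    for i in range(n_nests):
        candidate = nests[i] + alpha * levy(dim) * (nests[i] - best)
        f = sphere(candidate)
        if f < fitness[i]:
            nests[i], fitness[i] = candidate, f
    # abandon a fraction pa of the worst nests and rebuild them at random
    worst = np.argsort(fitness)[-int(pa * n_nests):]
    nests[worst] = rng.uniform(-5.0, 5.0, (len(worst), dim))
    fitness[worst] = [sphere(x) for x in nests[worst]]

print(f"best value after {n_iter} iterations: {fitness.min():.3e}")
```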

  19. Does optimal recall performance in the adaptive memory paradigm require the encoding context to encourage thoughts about the environment of evolutionary adaptation?

    PubMed

    Klein, Stanley B

    2013-01-01

    This study examined whether encoding conditions that encourage thoughts about the environment of evolutionary adaptation (EEA) are necessary to produce optimal recall in the adaptive memory paradigm. Participants were asked to judge a list of words for their relevance to personal survival under two survival-based scenarios. In one condition, the EEA-relevant context was specified (i.e., you are trying to survive on the savannah/grasslands). In the other condition, no context was specified (i.e., you are simply trying to stay alive). The two tasks produced virtually identical recall despite participants in the former condition reporting significantly more EEA context-relevant thoughts (i.e., the savannah) than did participants in the latter condition (who reported virtually no EEA-related thoughts). The findings are discussed in terms of (1) survival as a target of natural selection and (2) the role of evolutionary theory in understanding memory in modern humans. PMID:22915314

  20. Adaptive hybrid optimal quantum control for imprecisely characterized systems.

    PubMed

    Egger, D J; Wilhelm, F K

    2014-06-20

    Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful. PMID:24996074
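
    The two-stage structure of Ad-HOC (an open-loop gradient search on a model, then closed-loop gradient-free refinement against the measured fidelity) can be illustrated with a toy landscape; no quantum dynamics is simulated, and both fidelity functions below are invented stand-ins.

```python
# Toy two-stage optimization: model-based gradient ascent, then Nelder-Mead on the
# "experimental" fidelity. Landscapes are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

def fidelity_experiment(p):            # unknown true landscape, optimum at (1.0, -0.5)
    return 1.0 - 0.3 * (p[0] - 1.0) ** 2 - 0.2 * (p[1] + 0.5) ** 2

def fidelity_model(p):                 # model with imprecise parameters, optimum shifted
    return 1.0 - 0.3 * (p[0] - 1.2) ** 2 - 0.2 * (p[1] + 0.3) ** 2

# Stage 1: open-loop gradient ascent on the model.
p = np.zeros(2)
for _ in range(200):
    grad = np.array([-0.6 * (p[0] - 1.2), -0.4 * (p[1] + 0.3)])  # analytic model gradient
    p += 0.1 * grad
print("after model-based stage:", np.round(p, 3), f"true fidelity {fidelity_experiment(p):.4f}")

# Stage 2: closed-loop, gradient-free refinement on the experimental fidelity.
res = minimize(lambda q: -fidelity_experiment(q), p, method="Nelder-Mead")
print("after closed-loop stage: ", np.round(res.x, 3), f"true fidelity {-res.fun:.4f}")
```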

  1. Adaptive Optics Communications Performance Analysis

    NASA Technical Reports Server (NTRS)

    Srinivasan, M.; Vilnrotter, V.; Troy, M.; Wilson, K.

    2004-01-01

    The performance improvement obtained through the use of adaptive optics for deep-space communications in the presence of atmospheric turbulence is analyzed. Using simulated focal-plane signal-intensity distributions, uncoded pulse-position modulation (PPM) bit-error probabilities are calculated assuming the use of an adaptive focal-plane detector array as well as an adaptively sized single detector. It is demonstrated that current practical adaptive optics systems can yield performance gains over an uncompensated system ranging from approximately 1 dB to 6 dB depending upon the PPM order and background radiation level.

  2. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1990-09-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. This quarter, an empirical model for predicting pressure drop across a cyclone was developed through a statistical analysis of pressure drop data for 98 cyclone designs. The model is shown to perform better than the pressure drop models of First (1950), Alexander (1949), Barth (1956), Stairmand (1949), and Shepherd-Lapple (1940). This model is used with the efficiency model of Iozia and Leith (1990) to develop an optimization curve which predicts the minimum pressure drop and the dimension ratios of the optimized cyclone for a given aerodynamic cut diameter, d_50. The effect of variation in cyclone height, cyclone diameter, and flow on the optimization curve is determined. The optimization results are used to develop a design procedure for optimized cyclones. 37 refs., 10 figs., 4 tabs.

  3. Implementing Adaptive Performance Management in Server Applications

    SciTech Connect

    Liu, Yan; Gorton, Ian

    2007-06-11

    Performance and scalability are critical quality attributes for server applications in Internet-facing business systems. These applications operate in dynamic environments with rapidly fluctuating user loads and resource levels, and unpredictable system faults. Adaptive (autonomic) systems research aims to augment such server applications with intelligent control logic that can detect and react to sudden environmental changes. However, developing this adaptive logic is complex in itself. In addition, executing the adaptive logic consumes processing resources, and hence may (paradoxically) adversely affect application performance. In this paper we describe an approach for developing high-performance adaptive server applications and the supporting technology. The Adaptive Server Framework (ASF) is built on standard middleware services, and can be used to augment legacy systems with adaptive behavior without needing to change the application business logic. Crucially, ASF provides built-in control loop components to optimize the overall application performance, which comprises both the business and adaptive logic. The control loop is based on performance models and allows systems designers to tune the performance levels simply by modifying high level declarative policies. We demonstrate the use of ASF in a case study.

  4. Adaptive Inverse optimal neuromuscular electrical stimulation.

    PubMed

    Wang, Qiang; Sharma, Nitin; Johnson, Marcus; Gregory, Chris M; Dixon, Warren E

    2013-12-01

    Neuromuscular electrical stimulation (NMES) is a prescribed treatment for various neuromuscular disorders, where an electrical stimulus is provided to elicit a muscle contraction. Barriers to the development of NMES controllers exist because the muscle response to an electrical stimulation is nonlinear and the muscle model is uncertain. Efforts in this paper focus on the development of an adaptive inverse optimal NMES controller. The controller yields desired limb trajectory tracking while simultaneously minimizing a cost functional that is positive in the error states and stimulation input. The development of this framework allows tradeoffs to be made between tracking performance and control effort by putting different penalties on error states and control input, depending on the clinical goal or functional task. The controller is examined through a Lyapunov-based analysis. Experiments on able-bodied individuals are provided to demonstrate the performance of the developed controller. PMID:23757569

  5. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyzed some of the phonological rules of Russian loanword adaptation in Persian from the viewpoint of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of phonological processes in Russian loanword adaptation in Persian. We gathered about 50 current Russian loanwords and selected some of them to analyze. We…

  6. Dynamic optimization and adaptive controller design

    NASA Astrophysics Data System (ADS)

    Inamdar, S. R.

    2010-10-01

    In this work I present a new type of controller: an adaptive tracking controller that employs dynamic optimization to compute the current value of the control action for temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, consisting of the mass and heat balance equations, and then add cooling system dynamics to eliminate input multiplicity. The initial design value is obtained using local stability of steady states, where the approach temperature for cooling action is specified as a steady state and a design specification. Later, a correction is made in the dynamics: the material balance is manipulated to use feed concentration as a system parameter, as an adaptive control measure to avoid actuator saturation in the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter-adaptive controller is presented. An important component of this mathematical framework is reference trajectory generation, which forms the adaptive control measure.

  7. Adaptation and optimization of biological transport networks.

    PubMed

    Hu, Dan; Cai, David

    2013-09-27

    It has been hypothesized that topological structures of biological transport networks are consequences of energy optimization. Motivated by experimental observation, we propose that adaptation dynamics may underlie this optimization. In contrast to the global nature of optimization, our adaptation dynamics responds only to local information and can naturally incorporate fluctuations in flow distributions. The adaptation dynamics minimizes the global energy consumption to produce optimal networks, which may possess hierarchical loop structures in the presence of strong fluctuations in flow distribution. We further show that there may exist a new phase transition as there is a critical open probability of sinks, above which there are only trees for network structures whereas below which loops begin to emerge. PMID:24116821

  8. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1990-06-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. During the past quarter, we have nearly completed modeling work that employs the flow field measurements made during the past six months. In addition, we have begun final work using the results of this project to develop improved design methods for cyclones. This work involves optimization using the Iozia-Leith efficiency model and the Dirgo pressure drop model. This work will be completed this summer. 9 figs.

  9. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1989-06-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. We have now received all the equipment necessary for the flow visualization studies described in the last two progress reports. We have begun more detailed studies of the gas flow pattern within cyclones, as detailed below. We have also begun studies of the effect of particle concentration on cyclone performance. This work is critical to application of our results to commercial operations. 1 fig.

  10. Optimized modal tomography in adaptive optics

    NASA Astrophysics Data System (ADS)

    Tokovinin, A.; Le Louarn, M.; Viard, E.; Hubin, N.; Conan, R.

    2001-11-01

    The performance of modal Multi-Conjugate Adaptive Optics systems correcting a finite number of Zernike modes is studied using a second-order statistical analysis. Both natural and laser guide stars (GS) are considered. An optimized command matrix is computed from the covariances of atmospheric signals and noise, to minimize the residual phase variance averaged over the field of view. An efficient way to calculate atmospheric covariances of Zernike modes and their projections is found. The modal covariance code is shown to reproduce the known results on anisoplanatism and the cone effect with a single GS. It is then used to study the error of wave-front estimation from several off-axis GSs (tomography). With increasing radius Θ of the GS constellation, the tomographic error increases quadratically at small Θ, then linearly at larger Θ, when incomplete overlap of GS beams in the upper atmospheric layers provides the major contribution to this error, especially on low-order modes. It is demonstrated that the quality of turbulence correction with two deformable mirrors is practically independent of the conjugation altitude of the second mirror, as long as the command matrix is optimized for each configuration.
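
    The "optimized command matrix computed from the covariances of atmospheric signals and noise" is, in spirit, a minimum-variance (Wiener-type) reconstructor. The sketch below builds such a reconstructor for a random placeholder interaction matrix and compares its residual variance against a plain least-squares pseudo-inverse; none of the matrices model a real AO system.

```python
# Minimum-variance reconstructor from signal and noise covariances vs. plain pseudo-inverse.
import numpy as np

rng = np.random.default_rng(2)
n_modes, n_meas = 10, 30

D = rng.standard_normal((n_meas, n_modes))            # modes -> sensor measurements (placeholder)
C_a = np.diag(1.0 / np.arange(1, n_modes + 1) ** 2)   # prior covariance of modes (decaying)
C_n = 0.5 * np.eye(n_meas)                            # measurement-noise covariance

# MMSE (minimum residual variance) reconstructor: R = C_a D^T (D C_a D^T + C_n)^-1
R_mmse = C_a @ D.T @ np.linalg.inv(D @ C_a @ D.T + C_n)
R_pinv = np.linalg.pinv(D)                            # least-squares reconstructor, ignores statistics

def residual_var(R, trials=2000):
    a = rng.multivariate_normal(np.zeros(n_modes), C_a, trials)
    n = rng.multivariate_normal(np.zeros(n_meas), C_n, trials)
    a_hat = (D @ a.T + n.T).T @ R.T
    return float(np.mean(np.sum((a - a_hat) ** 2, axis=1)))

print(f"residual variance, MMSE : {residual_var(R_mmse):.3f}")
print(f"residual variance, pinv : {residual_var(R_pinv):.3f}")   # expected to be larger
```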

  11. Optimized micromirror arrays for adaptive optics

    NASA Astrophysics Data System (ADS)

    Michalicek, M. Adrian; Comtois, John H.; Hetherington, Dale L.

    1999-01-01

    This paper describes the design, layout, fabrication, and surface characterization of highly optimized surface micromachined micromirror devices. Design considerations and fabrication capabilities are presented. These devices are fabricated in the state-of-the-art, four-level, planarized, ultra-low-stress polysilicon process available at Sandia National Laboratories known as the Sandia Ultra-planar Multi-level MEMS Technology (SUMMiT). This enabling process permits the development of micromirror devices with near-ideal characteristics that have previously been unrealizable in standard three-layer polysilicon processes. The reduced 1 μm minimum feature sizes and 0.1 μm mask resolution make it possible to produce dense wiring patterns and irregularly shaped flexures. Likewise, mirror surfaces can be uniquely distributed and segmented in advanced patterns and often irregular shapes in order to minimize wavefront error across the pupil. The ultra-low-stress polysilicon and planarized upper layer allow designers to make larger and more complex micromirrors of varying shape and surface area within an array while maintaining uniform performance of optical surfaces. Powerful layout functions of the AutoCAD editor simplify the design of advanced micromirror arrays and make it possible to optimize devices according to the capabilities of the fabrication process. Micromirrors fabricated in this process have demonstrated a surface variance across the array from only 2-3 nm to a worst case of roughly 25 nm while boasting active surface areas of 98% or better. Combining the process planarization with a "planarized-by-design" approach will produce micromirror array surfaces that are limited in flatness only by the surface deposition roughness of the structural material. Ultimately, the combination of advanced process and layout capabilities has permitted the fabrication of highly optimized micromirror arrays for adaptive optics.

  12. Adaptive stimulus optimization for sensory systems neuroscience

    PubMed Central

    DiMattina, Christopher; Zhang, Kechen

    2013-01-01

    In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison. PMID:23761737

  13. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1989-03-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. This quarter, we have been hampered somewhat by slow delivery of the bubble generation system and arc lighting system placed on order last fall. This equipment is necessary to map the flow field within cyclones using the techniques described in last quarter's report. Using the bubble generator, we completed this quarter a study of the "natural length" of cyclones of 18 different configurations, each configuration operated at five different gas flows. Results suggest that the equation by Alexander for natural length is incorrect; natural length as measured with the bubble generation system is always below the bottom of the cyclones regardless of the cyclone configuration or gas flow, within the limits of the experimental cyclones tested. This finding is important because natural length is a term in equations used to predict cyclone efficiency. 1 tab.

  14. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1990-12-15

    An empirical model for predicting pressure drop across a cyclone, developed by Dirgo (1988), is presented. The model was developed through a statistical analysis of pressure drop data for 98 cyclone designs. This model is used with the efficiency model of Iozia and Leith (1990) to develop an optimization curve which predicts the minimum pressure drop and the dimension ratios of the optimized cyclone for a given aerodynamic cut diameter, d_50. The effect of variation in cyclone height, cyclone diameter, and flow on the optimization is determined. The optimization results are used to develop a design procedure for optimized cyclones. 33 refs., 10 figs., 4 tabs.

  15. Adaptive contrast imaging: transmit frequency optimization

    NASA Astrophysics Data System (ADS)

    Ménigot, Sébastien; Novell, Anthony; Voicu, Iulian; Bouakaz, Ayache; Girault, Jean-Marc

    2010-01-01

    Introduction: Since the introduction of ultrasound (US) contrast imaging, imaging systems have used a fixed emitting frequency. However, the insonified medium is time-varying, and therefore an adapted, time-varying excitation is expected. We suggest an adaptive imaging technique which selects the optimal transmit frequency that maximizes the acoustic contrast. Two algorithms have been proposed to find a US excitation whose frequency is optimal with microbubbles. Methods and Materials: Simulations were carried out for encapsulated microbubbles of 2 microns by considering the modified Rayleigh-Plesset equation for a 2 MHz transmit frequency and for various pressure levels (20 kPa up to 420 kPa). In vitro experiments were carried out using a transducer operating at 2 MHz and a programmable waveform generator. Contrast agent was then injected into a small container filled with water. Results and discussions: We show through simulations and in vitro experiments that our adaptive imaging technique gives: (1) in simulations, a gain in acoustic contrast of up to 9 dB compared to the traditional technique without optimization, and (2) in vitro, a gain of up to 18 dB. There is a non-negligible discrepancy between simulations and experiments. These differences are certainly due to the fact that our simulations do not take into account diffraction and nonlinear propagation effects. Further optimizations are underway.
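
    A very simple way to realize an adaptive frequency-selection loop is a gradient-free hill climb over transmit frequency with a shrinking step, as sketched below against a synthetic contrast-versus-frequency curve; the curve and parameters are placeholders, not a microbubble model, and the published algorithms may differ.

```python
# Gradient-free search over transmit frequency against a synthetic contrast curve.
import numpy as np

rng = np.random.default_rng(3)

def measured_contrast(f_mhz):                       # hypothetical response, peak near 2.4 MHz
    return 12.0 * np.exp(-((f_mhz - 2.4) / 0.7) ** 2) + 0.3 * rng.standard_normal()

f, step = 2.0, 0.4                                  # start at 2 MHz, initial step 0.4 MHz
best = measured_contrast(f)
for _ in range(40):
    for candidate in (f + step, f - step):
        c = measured_contrast(candidate)
        if c > best:
            f, best = candidate, c
            break
    else:
        step *= 0.7                                 # neither direction improved: refine the step
print(f"selected transmit frequency: {f:.2f} MHz (contrast estimate {best:.1f})")
```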

  16. Optimal Bayesian adaptive trials when treatment efficacy depends on biomarkers.

    PubMed

    Zhang, Yifan; Trippa, Lorenzo; Parmigiani, Giovanni

    2016-06-01

    Clinical biomarkers play an important role in precision medicine and are now extensively used in clinical trials, particularly in cancer. A response adaptive trial design enables researchers to use treatment results about earlier patients to aid in treatment decisions of later patients. Optimal adaptive trial designs have been developed without consideration of biomarkers. In this article, we describe the mathematical steps for computing optimal biomarker-integrated adaptive trial designs. These designs maximize the expected trial utility given any pre-specified utility function, though we focus here on maximizing patient responses within a given patient horizon. We describe the performance of the optimal design in different scenarios. We compare it to Bayesian Adaptive Randomization (BAR), which is emerging as a practical approach to develop adaptive trials. The difference in expected utility between BAR and optimal designs is smallest when the biomarker subgroups are highly imbalanced. We also compare BAR, a frequentist play-the-winner rule with integrated biomarkers, and a marker-stratified balanced randomization design (BR). We show that, in contrasting two treatments, BR achieves a nearly optimal expected utility when the patient horizon is relatively large. Our work provides a novel theoretical solution, as well as an absolute benchmark for the evaluation of trial designs in personalized medicine. PMID:26575199
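
    As a point of reference for the BAR comparator mentioned above, the sketch below runs a biomarker-stratified Bayesian adaptive randomization loop with Beta-Bernoulli posteriors and Thompson-style allocation; the response rates and horizon are invented, and this is not the paper's optimal design.

```python
# Biomarker-stratified Bayesian adaptive randomization with Beta-Bernoulli posteriors.
import numpy as np

rng = np.random.default_rng(4)
true_response = {                     # P(response | biomarker group, treatment) -- invented
    "marker+": (0.25, 0.55),          # treatment B better in marker-positive patients
    "marker-": (0.40, 0.30),          # treatment A better in marker-negative patients
}
horizon, responses = 400, 0
posterior = {g: np.ones((2, 2)) for g in true_response}   # per group: rows = arms, cols = (alpha, beta)

for _ in range(horizon):
    g = "marker+" if rng.random() < 0.5 else "marker-"
    draws = [rng.beta(a, b) for a, b in posterior[g]]      # Thompson sampling within the subgroup
    arm = int(np.argmax(draws))
    y = rng.random() < true_response[g][arm]
    responses += y
    posterior[g][arm] += (1, 0) if y else (0, 1)

print(f"responses over horizon of {horizon} patients: {responses}")
for g, post in posterior.items():
    alloc = post.sum(axis=1) - 2                           # subtract the uniform prior counts
    print(f"{g}: patients per arm = {alloc.astype(int)}")
```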

  17. Neurophysiology of performance monitoring and adaptive behavior.

    PubMed

    Ullsperger, Markus; Danielmeier, Claudia; Jocham, Gerhard

    2014-01-01

    Successful goal-directed behavior requires not only correct action selection, planning, and execution but also the ability to flexibly adapt behavior when performance problems occur or the environment changes. A prerequisite for determining the necessity, type, and magnitude of adjustments is to continuously monitor the course and outcome of one's actions. Feedback-control loops correcting deviations from intended states constitute a basic functional principle of adaptation at all levels of the nervous system. Here, we review the neurophysiology of evaluating action course and outcome with respect to their valence, i.e., reward and punishment, and initiating short- and long-term adaptations, learning, and decisions. Based on studies in humans and other mammals, we outline the physiological principles of performance monitoring and subsequent cognitive, motivational, autonomic, and behavioral adaptation and link them to the underlying neuroanatomy, neurochemistry, psychological theories, and computational models. We provide an overview of invasive and noninvasive systemic measures, such as electrophysiological, neuroimaging, and lesion data. We describe how a wide network of brain areas encompassing frontal cortices, basal ganglia, thalamus, and monoaminergic brain stem nuclei detects and evaluates deviations of actual from predicted states indicating changed action costs or outcomes. This information is used to learn and update stimulus and action values, guide action selection, and recruit adaptive mechanisms that compensate errors and optimize goal achievement. PMID:24382883

  18. Development of a digital adaptive optimal linear regulator flight controller

    NASA Technical Reports Server (NTRS)

    Berry, P.; Kaufman, H.

    1975-01-01

    Digital adaptive controllers have been proposed as a means for retaining uniform handling qualities over the flight envelope of a high-performance aircraft. Towards such an implementation, an explicit adaptive controller, which makes direct use of online parameter identification, has been developed and applied to the linearized lateral equations of motion for a typical fighter aircraft. The system is composed of an online weighted least-squares parameter identifier, a Kalman state filter, and a model following control law designed using optimal linear regulator theory. Simulation experiments with realistic measurement noise indicate that the proposed adaptive system has the potential for onboard implementation.
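
    The explicit adaptive structure described here (online parameter identification feeding an optimal linear regulator) can be illustrated for a scalar discrete-time plant: recursive least squares identifies the plant, and a discrete LQR gain is computed from the identified model. The plant numbers are invented, and the Kalman state filter of the original system is omitted.

```python
# Recursive least-squares identification followed by a discrete LQR design.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(5)
a_true, b_true = 0.92, 0.35                  # unknown plant x[k+1] = a x[k] + b u[k] + w[k]

theta = np.zeros(2)                          # RLS estimate of [a, b]
P = 100.0 * np.eye(2)                        # RLS covariance
x = 0.0
for k in range(300):
    u = rng.standard_normal()                # exciting input
    x_next = a_true * x + b_true * u + 0.01 * rng.standard_normal()
    phi = np.array([x, u])
    K = P @ phi / (1.0 + phi @ P @ phi)      # RLS gain
    theta += K * (x_next - phi @ theta)
    P -= np.outer(K, phi) @ P
    x = x_next

a_hat, b_hat = theta
print(f"identified a = {a_hat:.3f}, b = {b_hat:.3f}")

# Discrete LQR from the identified model: minimize sum(q x^2 + r u^2).
A, B, Q, R = np.array([[a_hat]]), np.array([[b_hat]]), np.array([[1.0]]), np.array([[0.1]])
S = solve_discrete_are(A, B, Q, R)
Kc = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)   # control law u = -Kc x
print(f"adaptive regulator gain: {Kc.item():.3f}")
```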

  19. Shape optimization including finite element grid adaptation

    NASA Technical Reports Server (NTRS)

    Kikuchi, N.; Taylor, J. E.

    1984-01-01

    The prediction of optimal shape design for structures depends on having a sufficient level of precision in the computation of structural response. These requirements become critical in situations where the region to be designed includes stress concentrations or unilateral contact surfaces, for example. In the approach to shape optimization discussed here, a means to obtain grid adaptation is incorporated into the finite element procedures. This facility makes it possible to maintain a level of quality in the computational estimate of response that is surely adequate for the shape design problem.

  20. Turbine Performance Optimization Task Status

    NASA Technical Reports Server (NTRS)

    Griffin, Lisa W.; Turner, James E. (Technical Monitor)

    2001-01-01

    The capability to optimize turbine performance and accurately predict unsteady loads will allow for increased reliability, Isp, and thrust-to-weight ratio. The development of a fast, accurate aerodynamic design, analysis, and optimization system is required.

  1. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
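
    The sketch below shows the flavor of a differential evolution loop driven by a single mutation expression that blends best-guided and random-difference terms, with per-individual control parameters occasionally resampled as a crude stand-in for self-adaptation; the authors' actual unified expression and adaptation scheme differ in detail.

```python
# Differential evolution with one blended mutation expression and resampled control parameters.
import numpy as np

rng = np.random.default_rng(6)
dim, pop_size, n_gen = 10, 30, 300

def rastrigin(x):
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

pop = rng.uniform(-5.12, 5.12, (pop_size, dim))
fit = np.array([rastrigin(x) for x in pop])
F1 = rng.uniform(0.1, 0.9, pop_size)        # per-individual weights and crossover rates
F2 = rng.uniform(0.1, 0.9, pop_size)
CR = rng.uniform(0.1, 0.9, pop_size)

for _ in range(n_gen):
    best = pop[np.argmin(fit)]
    for i in range(pop_size):
        if rng.random() < 0.1:              # occasionally resample this individual's parameters
            F1[i], F2[i], CR[i] = rng.uniform(0.1, 0.9, 3)
        r1, r2 = rng.choice(pop_size, 2, replace=False)
        # single mutation expression covering best-guided and random exploration terms
        v = pop[i] + F1[i] * (best - pop[i]) + F2[i] * (pop[r1] - pop[r2])
        cross = rng.random(dim) < CR[i]
        cross[rng.integers(dim)] = True     # guarantee at least one mutated component
        trial = np.where(cross, v, pop[i])
        f = rastrigin(trial)
        if f < fit[i]:                      # greedy selection
            pop[i], fit[i] = trial, f

print(f"best Rastrigin value found: {fit.min():.3f}")
```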

  2. Direct aperture optimization for online adaptive radiation therapy

    SciTech Connect

    Mestrovic, Ante; Milette, Marie-Pierre; Nichol, Alan; Clark, Brenda G.; Otto, Karl

    2007-05-15

    This paper is the first investigation of using direct aperture optimization (DAO) for online adaptive radiation therapy (ART). A geometrical model representing the anatomy of a typical prostate case was created. To simulate interfractional deformations, four different anatomical deformations were created by systematically deforming the original anatomy by various amounts (0.25, 0.50, 0.75, and 1.00 cm). We describe a series of techniques where the original treatment plan was adapted in order to correct for the deterioration of dose distribution quality caused by the anatomical deformations. We found that the average time needed to adapt the original plan to arrive at a clinically acceptable plan is roughly half of the time needed for a complete plan regeneration, for all four anatomical deformations. Furthermore, through modification of the DAO algorithm the optimization search space was reduced and the plan adaptation was significantly accelerated. For the first anatomical deformation (0.25 cm), the plan adaptation was six times more efficient than the complete plan regeneration. For the 0.50 and 0.75 cm deformations, the optimization efficiency was increased by a factor of roughly 3 compared to the complete plan regeneration. However, for the anatomical deformation of 1.00 cm, the reduction of the optimization search space during plan adaptation did not result in any efficiency improvement over the original (nonmodified) plan adaptation. The anatomical deformation of 1.00 cm demonstrates the limit of this approach. We propose an innovative approach to online ART in which the plan adaptation and radiation delivery are merged together and performed concurrently--adaptive radiation delivery (ARD). A fundamental advantage of ARD is the fact that radiation delivery can start almost immediately after image acquisition and evaluation. Most of the original plan adaptation is done during the radiation delivery, so the time spent adapting the original plan does not

  3. Structured near-optimal channel-adapted quantum error correction

    NASA Astrophysics Data System (ADS)

    Fletcher, Andrew S.; Shor, Peter W.; Win, Moe Z.

    2008-01-01

    We present a class of numerical algorithms which adapt a quantum error correction scheme to a channel model. Given an encoding and a channel model, it was previously shown that the quantum operation that maximizes the average entanglement fidelity may be calculated by a semidefinite program (SDP), which is a convex optimization. While optimal, this recovery operation is computationally difficult for long codes. Furthermore, the optimal recovery operation has no structure beyond the completely positive trace-preserving constraint. We derive methods to generate structured channel-adapted error recovery operations. Specifically, each recovery operation begins with a projective error syndrome measurement. The algorithms to compute the structured recovery operations are more scalable than the SDP and yield recovery operations with an intuitive physical form. Using Lagrange duality, we derive performance bounds to certify near-optimality.

  4. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.

  5. MPQC: Performance Analysis and Optimization

    SciTech Connect

    Sarje, Abhinav; Williams, Samuel; Bailey, David

    2012-11-30

    MPQC (Massively Parallel Quantum Chemistry) is a widely used computational quantum chemistry code. It is capable of performing a number of computations commonly occurring in quantum chemistry. In order to achieve better performance of MPQC, in this report we present a detailed performance analysis of this code. We then perform loop and memory access optimizations, and measure performance improvements by comparing the performance of the optimized code with that of the original MPQC code. We observe that the optimized MPQC code achieves a significant improvement in the performance through a better utilization of vector processing and memory hierarchies.

  6. Unit Commitment by Adaptive Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Saber, Ahmed Yousuf; Senjyu, Tomonobu; Miyagi, Tsukasa; Urasaki, Naomitsu; Funabashi, Toshihisa

    This paper presents an Adaptive Particle Swarm Optimization (APSO) for the Unit Commitment (UC) problem. APSO reliably and accurately tracks a continuously changing solution. By analyzing the social model of standard PSO for the UC problem of variable size and load demand, adaptive criteria are applied to the PSO parameters and the global best particle (knowledge) based on the diversity of fitness. In the proposed method, PSO parameters are automatically adjusted using a Gaussian modification. To increase the knowledge, the global best particle is updated in each generation instead of being kept fixed. To keep the method from becoming frozen, idle particles are reset. The real velocity is digitized (0/1) by a logistic function for binary UC. Finally, benchmark data and methods are used to show the effectiveness of the proposed method.
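
    The logistic digitization of velocities mentioned above is the core of binary PSO. The sketch below applies it to a toy single-period unit-commitment problem with invented unit data; the Gaussian parameter adaptation, global-best updating rule, and idle-particle resets of APSO are omitted.

```python
# Binary PSO with logistic (sigmoid) velocity digitization on a toy unit-commitment problem.
import numpy as np

rng = np.random.default_rng(7)
capacity = np.array([100.0, 80.0, 60.0, 40.0, 30.0])      # MW per unit (invented)
cost = np.array([500.0, 450.0, 380.0, 300.0, 260.0])      # cost if committed (invented)
demand = 170.0

def total_cost(u):                       # committed cost + heavy penalty for unmet demand
    return float(u @ cost + 1e4 * max(0.0, demand - u @ capacity))

n_particles, n_iter, w, c1, c2 = 20, 200, 0.7, 1.5, 1.5
x = (rng.random((n_particles, 5)) < 0.5).astype(float)    # binary commitment vectors
v = rng.uniform(-1, 1, (n_particles, 5))
pbest, pbest_f = x.copy(), np.array([total_cost(u) for u in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(float)   # logistic digitization
    f = np.array([total_cost(u) for u in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("committed units:", gbest.astype(int), " cost:", total_cost(gbest))
```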

  7. Optimal Control Modification for Robust Adaptation of Singularly Perturbed Systems with Slow Actuators

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham; Stepanyan, Vahram; Boskovic, Jovan

    2009-01-01

    Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. The model matching condition in the transformed time coordinate results in an increase in the feedback gain and a modification of the adaptive law.

  8. Optimal Control Modification Adaptive Law for Time-Scale Separated Systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. The model matching condition in the transformed time coordinate results in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.

  9. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
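
    A minimal sketch of fitness-driven acceleration coefficients is shown below; the mapping from fitness to c1/c2 is an illustrative assumption and may differ from the exact AAPSO formulation in the cited work.

      import numpy as np

      def adaptive_coefficients(fitness, c_min=0.5, c_max=2.5):
          """Illustrative fitness-driven acceleration coefficients: particles far from
          the best fitness get a larger social pull (c2) and a smaller cognitive pull
          (c1). The exact AAPSO mapping may differ."""
          f = np.asarray(fitness, dtype=float)
          rank = (f - f.min()) / (np.ptp(f) + 1e-12)   # 0 = best, 1 = worst (lower is better)
          c1 = c_max - (c_max - c_min) * rank          # good particles trust themselves
          c2 = c_min + (c_max - c_min) * rank          # poor particles follow the global best
          return c1, c2

      # Example with four particles' SVM validation errors used as fitness values.
      c1, c2 = adaptive_coefficients([0.12, 0.30, 0.08, 0.45])
      print(np.round(c1, 2), np.round(c2, 2))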

  10. Support vector machine based on adaptive acceleration particle swarm optimization.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

  11. Optimal Pid Tuning for Power System Stabilizers Using Adaptive Particle Swarm Optimization Technique

    NASA Astrophysics Data System (ADS)

    Oonsivilai, Anant; Marungsri, Boonruang

    2008-10-01

    An application of an intelligent search technique to find the optimal parameters of a power system stabilizer (PSS) with a proportional-integral-derivative (PID) controller for a single-machine infinite-bus system is presented. An efficient intelligent search technique, adaptive particle swarm optimization (APSO), is employed to demonstrate the usefulness of such techniques in tuning the PID-PSS parameters. The damping of system oscillations is improved by minimizing an objective function with adaptive particle swarm optimization. At the same operating point, the PID-PSS parameters are also tuned by the Ziegler-Nichols method. The performance of the proposed controller is compared to that of the conventional Ziegler-Nichols-tuned PID controller. The results reveal the superior effectiveness of the proposed APSO-based PID controller.

  12. A Tutorial on Adaptive Design Optimization

    PubMed Central

    Myung, Jay I.; Cavagnaro, Daniel R.; Pitt, Mark A.

    2013-01-01

    Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond. PMID:23997275

  13. Adaptive Mallow's optimization for weighted median filters

    NASA Astrophysics Data System (ADS)

    Rachuri, Raghu; Rao, Sathyanarayana S.

    2002-05-01

    This work extends the idea of spectral optimization for the design of weighted median (WM) filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filter are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSPs) are as close as possible to an SSP set predetermined by Mallows' theory. Sample selection probabilities have been used as a basis for designing stack smoothers, as they give a measure of the filter's detail-preserving ability and yield non-negative filter weights. We extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively, which are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass, or low-pass frequency characteristics. Unlike linear filters, however, weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
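
    The sketch below shows how a weighted median filter with possibly negative weights can be evaluated once the weights have been derived from FIR coefficients; the negative-weight convention (negate the sample, use the weight magnitude) is a common one, and the window weights and test signal are illustrative assumptions.

      import numpy as np

      def weighted_median(samples, weights):
          """Weighted median admitting negative weights: a negative weight is applied
          by negating its sample and using the weight's magnitude (a common WM convention)."""
          s = np.asarray(samples, dtype=float)
          w = np.asarray(weights, dtype=float)
          s = np.where(w < 0, -s, s)
          w = np.abs(w)
          order = np.argsort(s)
          s, w = s[order], w[order]
          csum = np.cumsum(w)
          return s[np.searchsorted(csum, 0.5 * csum[-1])]

      def wm_filter(signal, weights):
          """Slide a weighted-median window (length = len(weights)) over the signal."""
          n, k = len(signal), len(weights)
          x = np.pad(signal, k // 2, mode="edge")
          return np.array([weighted_median(x[i:i + k], weights) for i in range(n)])

      rng = np.random.default_rng(1)
      noisy = np.sin(np.linspace(0, 6, 200)) + (rng.random(200) < 0.05) * 3.0  # impulsive noise
      smooth = wm_filter(noisy, weights=[1, 2, 3, 2, 1])   # low-pass-like WM filter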

  14. Optimizing rotary drill performance

    SciTech Connect

    Schivley, G.P. Jr.

    1995-12-31

    Data is presented showing Penetration Rate (PR) versus Force-on-the-Bit (FB) and Bit Angular Speed (N). Using this data, it is shown how FB and N each uniquely contribute to the PR for any particular drilling situation. This data represents many mining situations, including coal, copper, gold, iron ore, and limestone quarrying. The important relationship between Penetration per Revolution (P/R) and the height of the cutting elements of the bit (CH) is discussed. Drill performance is then reviewed, considering the effect of FB and N on bit life. All this leads to recommendations for the operating values of FB and N for drilling situations where the rock is not highly abrasive and bit replacements are due to catastrophic failure of the bit cone bearings. The contribution of compressed air to the drilling process is discussed. It is suggested that if the air issuing from the bit jets is supersonic, it may enhance the sweeping of the hole bottom. It is also shown that uphole air velocity alone is not enough to provide adequate transport of the rock cuttings up the annulus of a drilled hole; the air volume flow rate must also be considered to ensure adequate particle spacing so that aerodynamic drag can effectively lift the cuttings up and out of the hole annulus.

  15. Performance Optimization and Auto-Tuning

    SciTech Connect

    Howison, Mark

    2012-10-01

    In the broader computational research community, one subject of recent research is the problem of adapting algorithms to make effective use of multi- and many-core processors. Effective use of these architectures, which have complex memory hierarchies with many layers of cache, typically involves a careful examination of how an algorithm moves data through the memory hierarchy. Unfortunately, there is often a non-obvious relationship between algorithmic parameters such as blocking strategies, their impact on memory utilization, and, in turn, runtime performance. Auto-tuning is an empirical method used to discover optimal values for tunable algorithmic parameters under such circumstances. The challenge is compounded by the fact that the settings that produce the best performance for a given problem and a given platform may not be the best for a different problem on the same platform, or the same problem on a different platform. The high performance visualization research community has begun to explore and adapt the principles of auto-tuning for the purpose of optimizing codes on modern multi- and many-core processors. This report focuses on how performance optimization studies reveal a dramatic variation in performance for two fundamental visualization algorithms: one based on a stencil operation with structured, uniform memory access, and the other based on ray casting volume rendering, which uses unstructured memory access patterns. The two case studies highlighted in this report show that the extra effort required to optimize such codes by adjusting the tunable algorithmic parameters can return substantial gains in performance. Additionally, these case studies explore the potential impact of, and the interaction between, algorithmic optimizations and tunable algorithmic parameters, along with the potential performance gains resulting from leveraging architecture-specific features.
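
    A generic auto-tuning harness of the kind described above simply times a kernel over a set of candidate tunable parameters and keeps the fastest; the toy cache-blocked kernel and tile sizes below are illustrative assumptions, not the codes studied in the report.

      import time
      import numpy as np

      def blocked_transpose(a, block):
          """Toy tunable kernel: cache-blocked transpose with tile size 'block'."""
          n = a.shape[0]
          out = np.empty_like(a)
          for i in range(0, n, block):
              for j in range(0, n, block):
                  out[j:j + block, i:i + block] = a[i:i + block, j:j + block].T
          return out

      def time_once(kernel, arg, params):
          start = time.perf_counter()
          kernel(arg, **params)
          return time.perf_counter() - start

      def autotune(kernel, arg, candidates, repeats=3):
          """Empirically time each candidate parameter set and keep the fastest."""
          best, best_t = None, float("inf")
          for params in candidates:
              t = min(time_once(kernel, arg, params) for _ in range(repeats))
              if t < best_t:
                  best, best_t = params, t
          return best, best_t

      a = np.random.default_rng(0).random((2048, 2048))
      best, t = autotune(blocked_transpose, a, [{"block": b} for b in (16, 32, 64, 128, 256)])
      print("fastest tile size:", best, f"({t * 1e3:.1f} ms)")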

  16. Adaptive neuro-fuzzy estimation of optimal lens system parameters

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Pavlović, Nenad T.; Shamshirband, Shahaboddin; Mat Kiah, Miss Laiha; Badrul Anuar, Nor; Idna Idris, Mohd Yamani

    2014-04-01

    Due to the popularization of digital technology, the demand for high-quality digital products has become critical. The quantitative assessment of image quality is an important consideration in any type of imaging system. Therefore, developing a design that meets the requirements of good image quality is desirable. The lens system design represents a crucial factor for good image quality, and the optimization procedure is the main part of the lens system design methodology. Lens system optimization is a complex non-linear optimization task, often with intricate physical constraints, for which there are no analytical solutions. Lens system design therefore provides ideal problems for intelligent optimization algorithms. There are many tools which can be used to measure optical performance. One very useful tool is the spot diagram, which gives an indication of the image of a point object. In this paper, one optimization criterion for the lens system, the spot size radius, is considered. This paper presents a new lens optimization method based on an adaptive neuro-fuzzy inference strategy (ANFIS). This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated.

  17. Expected treatment dose construction and adaptive inverse planning optimization: Implementation for offline head and neck cancer adaptive radiotherapy

    SciTech Connect

    Yan Di; Liang Jian

    2013-02-15

    Purpose: To construct the expected treatment dose for adaptive inverse planning optimization, and to evaluate it on head and neck (h&n) cancer adaptive treatment modification. Methods: An adaptive inverse planning engine was developed and integrated in our in-house adaptive treatment control system. The adaptive inverse planning engine includes an expected treatment dose constructed using the daily cone beam (CB) CT images in its objective and constraints. Feasibility of the adaptive inverse planning optimization was evaluated retrospectively using daily CBCT images obtained from the image guided IMRT treatment of 19 h&n cancer patients. Adaptive treatment modification strategies with respect to the timing and the number of adaptive inverse planning optimizations during the treatment course were evaluated using the cumulative treatment dose in organs of interest constructed using all daily CBCT images. Results: The expected treatment dose was constructed to include both the delivered dose, to date, and the estimated dose for the remaining treatment during the adaptive treatment course. It was used in treatment evaluation, as well as in constructing the objective and constraints for adaptive inverse planning optimization. The optimization engine can feasibly perform planning optimization based on a preassigned treatment modification schedule. Compared to conventional IMRT, the adaptive treatment for h&n cancer showed a clear dose-volume improvement for all critical normal organs. The dose-volume reductions of the right and left parotid glands, spinal cord, brain stem, and mandible were (17 ± 6)%, (14 ± 6)%, (11 ± 6)%, (12 ± 8)%, and (5 ± 3)%, respectively, with a single adaptive modification performed after the second treatment week; (24 ± 6)%, (22 ± 8)%, (21 ± 5)%, (19 ± 8)%, and (10 ± 6)% with three weekly modifications; and (28 ± 5)%, (25 ± 9)%, (26 ± 5)%, (24 ± 8)%, and (15 ± 9)% with five weekly modifications. Conclusions

  18. Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.

    2010-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation is referred to as the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (v) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.

  19. Adaptive optimization for pilot-tone aided phase noise compensation

    NASA Astrophysics Data System (ADS)

    Cui, Sheng; Xu, Mengran; Xia, Wenjuan; Ke, Chanjian; Xia, Zijie; Liu, Deming

    2015-11-01

    The pilot-tone (PT) aided phase noise compensation algorithm is very simple and effective, especially for flexible optical networks, because the phase noise coming from both the Tx/Rx lasers and nonlinear cross-phase modulation (XPM) during transmission can be adaptively compensated without computationally costly nonlinear operations or knowledge of the neighboring channels and the optical link configuration. However, to achieve the best performance, two key parameters, i.e., the pilot-to-signal power ratio and the pilot bandpass filter bandwidth, need to be optimized. In this paper it is demonstrated that constellation information can be used to adjust the two parameters adaptively to achieve the minimum BER in both homogeneous and hybrid single-carrier transmission systems with different laser phase noise (LPN), XPM, and amplified spontaneous emission (ASE) noise distortions.

  20. Adaptive, predictive controller for optimal process control

    SciTech Connect

    Brown, S.K.; Baum, C.C.; Bowling, P.S.; Buescher, K.L.; Hanagandi, V.M.; Hinde, R.F. Jr.; Jones, R.D.; Parkinson, W.J.

    1995-12-01

    One can derive a model for use in a Model Predictive Controller (MPC) from first principles or from experimental data. Until recently, both methods failed for all but the simplest processes. First principles are almost always incomplete, and fitting to experimental data fails for dimensions greater than one as well as for non-linear cases. Several authors have suggested the use of a neural network to fit the experimental data to a multi-dimensional and/or non-linear model. Most networks, however, use simple sigmoid functions and backpropagation for fitting. Training of these networks generally requires large amounts of data and, consequently, very long training times. In 1993 we reported on the tuning and optimization of a negative ion source using a special neural network [2]. One of the properties of this network (CNLSnet), a modified radial basis function network, is that it is able to fit data with few basis functions. Another is that its training is linear, resulting in guaranteed convergence and rapid training. We found the training to be rapid enough to support real-time control. This work has been extended to incorporate this network into an MPC, using the model built by the network for predictive control. This controller has shown some remarkable capabilities in such non-linear applications as continuous stirred exothermic tank reactors and high-purity fractional distillation columns [3]. The controller is able not only to build an appropriate model from operating data but also to thin the network continuously so that the model adapts to changing plant conditions. The controller is discussed, as well as its possible use in several of the difficult control problems that face this community.
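
    The linear-in-the-weights training property mentioned above can be sketched with a plain Gaussian radial basis function model fitted by least squares and then used for one-step predictive control; the toy plant, basis widths, and grid search below are assumptions and do not reproduce CNLSnet or its network-thinning step.

      import numpy as np

      rng = np.random.default_rng(0)

      def rbf_features(X, centers, width):
          """Gaussian radial basis functions evaluated at the rows of X."""
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * width ** 2))

      def plant(y, u):
          # Toy nonlinear plant (unknown to the controller), used only to generate data.
          return 0.6 * y + 0.4 * np.tanh(u) + 0.05 * rng.normal()

      # Collect operating data.
      Y, U, Ynext, y = [], [], [], 0.0
      for _ in range(500):
          u = rng.uniform(-2, 2)
          y_next = plant(y, u)
          Y.append(y); U.append(u); Ynext.append(y_next)
          y = y_next
      X = np.column_stack([Y, U])

      # Fit the RBF model: training is linear in the output weights.
      centers = X[rng.choice(len(X), 25, replace=False)]
      Phi = rbf_features(X, centers, width=0.8)
      w, *_ = np.linalg.lstsq(Phi, np.array(Ynext), rcond=None)

      # One-step predictive control: pick the input minimizing the predicted error.
      def mpc_step(y_now, setpoint, u_grid=np.linspace(-2, 2, 81)):
          Xq = np.column_stack([np.full_like(u_grid, y_now), u_grid])
          y_pred = rbf_features(Xq, centers, width=0.8) @ w
          return u_grid[np.argmin((y_pred - setpoint) ** 2)]

      print("suggested control input:", round(mpc_step(y_now=0.2, setpoint=0.8), 3))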

  1. A forward method for optimal stochastic nonlinear and adaptive control

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1988-01-01

    A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.

  2. Chromatic adaptation performance of different RGB sensors

    NASA Astrophysics Data System (ADS)

    Susstrunk, Sabine E.; Holm, Jack M.; Finlayson, Graham D.

    2000-12-01

    Chromatic adaptation transforms are used in imaging systems to map image appearance to colorimetry under different illumination sources. In this paper, the performance of different chromatic adaptation transforms (CATs) is compared with the performance of transforms based on RGB primaries that have been investigated in relation to standard color spaces for digital still camera characterization and image interchange. The chromatic adaptation transforms studied are von Kries, Bradford, Sharp, and CMCCAT2000. The RGB primaries investigated are ROMM, ITU-R BT.709, and 'prime wavelength' RGB. The chromatic adaptation model used is a von Kries model that linearly scales post-adaptation cone response with illuminant-dependent coefficients. The transforms were evaluated using 16 sets of corresponding color data. The actual and predicted tristimulus values were converted to CIELAB, and three different error prediction metrics, ΔE_Lab, ΔE_CIE94, and ΔE_CMC(1:1), were applied to the results. One-tailed Student t-tests for matched pairs were calculated to determine whether the variations in errors are statistically significant. For the given corresponding color data sets, the traditional chromatic adaptation transforms, Sharp CAT and CMCCAT2000, performed best. However, some transforms based on RGB primaries also exhibit good chromatic adaptation behavior, leading to the conclusion that white-point-independent RGB spaces for image encoding can be defined. This conclusion holds only if the linear von Kries model is considered adequate to predict chromatic adaptation behavior.
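
    The von Kries model described above amounts to a diagonal scaling in a cone-like RGB space. A minimal sketch using the Bradford matrix as the basis (one of the CATs studied) follows; the sample color and the white points for illuminants A and D65 are standard published values used here only for illustration.

      import numpy as np

      # Bradford cone-response matrix, a commonly used CAT basis.
      M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                             [-0.7502,  1.7135,  0.0367],
                             [ 0.0389, -0.0685,  1.0296]])

      def von_kries_adapt(xyz, white_src, white_dst, M=M_BRADFORD):
          """Von Kries adaptation: scale each cone-like channel by the ratio of the
          destination to source white-point responses in the chosen RGB basis."""
          scale = (M @ white_dst) / (M @ white_src)
          return np.linalg.inv(M) @ (scale * (M @ xyz))

      # Example: adapt a colour from illuminant A to D65 (white points given in XYZ).
      white_A = np.array([1.09850, 1.00000, 0.35585])
      white_D65 = np.array([0.95047, 1.00000, 1.08883])
      print(von_kries_adapt(np.array([0.35, 0.30, 0.12]), white_A, white_D65))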

  3. Cockpit Adaptive Automation and Pilot Performance

    NASA Technical Reports Server (NTRS)

    Parasuraman, Raja

    2001-01-01

    The introduction of high-level automated systems in the aircraft cockpit has provided several benefits, e.g., new capabilities, enhanced operational efficiency, and reduced crew workload. At the same time, conventional 'static' automation has sometimes degraded human operator monitoring performance, increased workload, and reduced situation awareness. Adaptive automation represents an alternative to static automation. In this approach, task allocation between human operators and computer systems is flexible and context-dependent rather than static. Adaptive automation, or adaptive task allocation, is thought to provide for regulation of operator workload and performance, while preserving the benefits of static automation. In previous research we have reported beneficial effects of adaptive automation on the performance of both pilots and non-pilots of flight-related tasks. For adaptive systems to be viable, however, such benefits need to be examined jointly in the context of a single set of tasks. The studies carried out under this project evaluated a systematic method for combining different forms of adaptive automation. A model for effective combination of different forms of adaptive automation, based on matching adaptation to operator workload, was proposed and tested. The model was evaluated in studies using IFR-rated pilots flying a general-aviation simulator. Performance, subjective, and physiological (heart rate variability, eye scan-paths) measures of workload were recorded. The studies compared workload-based adaptation to non-adaptive control conditions and found evidence for systematic benefits of adaptive automation. The research provides an empirical basis for evaluating the effectiveness of adaptive automation in the cockpit. The results contribute to the development of design principles and guidelines for the implementation of adaptive automation in the cockpit, particularly in general aviation, and in other human-machine systems. Project goals

  4. Rewarding imperfect motor performance reduces adaptive changes.

    PubMed

    van der Kooij, K; Overvliet, K E

    2016-06-01

    Could a pat on the back affect motor adaptation? Recent studies indeed suggest that rewards can boost motor adaptation. However, the rewards used were typically reward gradients that carried quite detailed information about performance. We investigated whether simple binary rewards affected how participants learned to correct for a visual rotation of performance feedback in a 3D pointing task. To do so, we asked participants to align their unseen hand with virtual target cubes in alternating blocks with and without spatial performance feedback. Forty participants were assigned to one of two groups: a 'spatial only' group, in which the feedback consisted of showing the (perturbed) endpoint of the hand, or to a 'spatial & reward' group, in which a reward could be received in addition to the spatial feedback. In addition, six participants were tested in a 'reward only' group. Binary reward was given when the participants' hand landed in a virtual 'hit area' that was adapted to individual performance to reward about half the trials. The results show a typical pattern of adaptation in both the 'spatial only' and the 'spatial & reward' groups, whereas the 'reward only' group was unable to adapt. The rewards did not affect the overall pattern of adaptation in the 'spatial & reward' group. However, on a trial-by-trial basis, the rewards reduced adaptive changes to spatial errors. PMID:26758721

  5. Optimizing Reservoir Operation to Adapt to the Climate Change

    NASA Astrophysics Data System (ADS)

    Madadgar, S.; Jung, I.; Moradkhani, H.

    2010-12-01

    Climate change and upcoming variation in flood timing necessitate the adaptation of current rule curves developed for the operation of water reservoirs so as to reduce the potential damage from either flood or drought events. This study attempts to optimize the current rule curves of Cougar Dam on the McKenzie River in Oregon, addressing possible climate conditions in the 21st century. The objective is to minimize the failure of operations to meet either designated demands or the flood limit at a downstream checkpoint. A simulation/optimization model, including the standard operation policy and a global optimization method, tunes the current rule curve for 8 GCMs and 2 greenhouse gas emission scenarios. The Precipitation Runoff Modeling System (PRMS) is used as the hydrology model to project streamflow for the period 2000-2100 using downscaled precipitation and temperature forcing from the 8 GCMs and two emission scenarios. An ensemble of rule curves, each associated with an individual scenario, is obtained by optimizing the reservoir operation. The simulation of reservoir operation, for all the scenarios and the expected value of the ensemble, is conducted, and performance is assessed using statistical indices including reliability, resilience, vulnerability, and sustainability.

  6. Optimal mirror deformation for multi conjugate adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Raffetseder, S.; Ramlau, R.; Yudytskiy, M.

    2016-02-01

    Multi conjugate adaptive optics (MCAO) is a system planned for all future extremely large telescopes to compensate in real-time for the optical distortions caused by atmospheric turbulence over a wide field of view. The principles of MCAO are based on two inverse problems: a stable tomographic reconstruction of the turbulence profile followed by the optimal alignment of multiple deformable mirrors (DMs), conjugated to different altitudes in the atmosphere. We present a novel method to treat the optimal mirror deformation problem for MCAO. Contrary to the standard approach, where the problem is formulated over a discrete set of optimization directions, we focus on the solution of the continuous optimization problem. In the paper we study the existence and uniqueness of the solution and present a Tikhonov-based regularization method. This approach gives us the flexibility to apply quadrature rules for a more sophisticated discretization scheme. Using numerical simulations in the context of the European Extremely Large Telescope we show that our method leads to a significant improvement in the reconstruction quality over the standard approach and reduces the numerical burden on the computer performing the computations.

  7. To optimize performance, begin at the pulverizers

    SciTech Connect

    Storm, R.F.; Storm, S.K.

    2007-02-15

    A systematic, performance-driven maintenance program for optimizing combustion can achieve great results. The challenge for O&M staff is deciding which proven strategies and tactics for reducing NOx and improving plant reliability to adapt and implement. The structured approach presented here has proven its worth at several plants that have wrestled with such problems. Based on experience gained by Storm Technologies, the article explores opportunities for raising the efficiency of pulverized-coal-fired boilers by improving the performance of their pulverizers. In summary, significant ways to optimize performance are: increasing the fineness of coal particles to enhance the release of fuel-bound nitrogen and to improve fuel balance, and reducing the total airflow and excess air to reduce thermal NOx production. 6 figs., 2 tabs.

  8. Trajectory Optimization with Adaptive Deployable Entry and Placement Technology Architecture

    NASA Astrophysics Data System (ADS)

    Saranathan, H.; Saikia, S.; Grant, M. J.; Longuski, J. M.

    2014-06-01

    This paper compares the results of trajectory optimization for Adaptive Deployable Entry and Placement Technology (ADEPT) using different control methods. ADEPT addresses the limitations of current EDL technology in delivering heavy payloads to Mars.

  9. ROAMing terrain (Real-time Optimally Adapting Meshes)

    SciTech Connect

    Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.; Miller, M.C.; Aldrich, C.; Mineev, M.

    1997-07-01

    Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportionate to the number of triangle changes per frame, which is typically a few percent of the output mesh size, hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.

  10. Multidimensional Adaptive Testing with Optimal Design Criteria for Item Selection

    ERIC Educational Resources Information Center

    Mulder, Joris; van der Linden, Wim J.

    2009-01-01

    Several criteria from the optimal design literature are examined for use with item selection in multidimensional adaptive testing. In particular, it is examined what criteria are appropriate for adaptive testing in which all abilities are intentional, some should be considered as a nuisance, or the interest is in the testing of a composite of the…

  11. Adaptive optimization and control using neural networks

    SciTech Connect

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  12. Reliability Optimization Design for Contact Springs of AC Contactors Based on Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng; Su, Xiuping; Wu, Ziran; Xu, Chengwen

    The paper illustrates the procedure of reliability optimization modeling for contact springs of AC contactors under nonlinear multi-constraint conditions. The adaptive genetic algorithm (AGA) is utilized to perform reliability optimization on the contact spring parameters of a type of AC contactor. A method that changes crossover and mutation rates at different times in the AGA can effectively avoid premature convergence, and experimental tests are performed after optimization. The experimental result shows that the mass of each optimized spring is reduced by 16.2%, while the reliability increases to 99.9% from 94.5%. The experimental result verifies the correctness and feasibility of this reliability optimization designing method.
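
    A minimal sketch of a genetic algorithm whose crossover and mutation rates change over the run, in the spirit of the AGA described above, is given below; the rate schedule, the stand-in objective, and all parameters are assumptions and do not reproduce the contact-spring reliability model.

      import numpy as np

      rng = np.random.default_rng(0)

      def adaptive_rates(gen, max_gen, pc_range=(0.9, 0.6), pm_range=(0.01, 0.1)):
          """Illustrative schedule: crossover probability decays and mutation
          probability grows over the run, which helps counter premature convergence."""
          frac = gen / max_gen
          pc = pc_range[0] + (pc_range[1] - pc_range[0]) * frac
          pm = pm_range[0] + (pm_range[1] - pm_range[0]) * frac
          return pc, pm

      def objective(pop):
          # Stand-in objective (minimize); a real model would combine spring mass
          # with a penalty for violating the reliability constraint.
          return np.sum((pop - 0.3) ** 2, axis=1)

      pop, max_gen = rng.random((40, 4)), 200              # 4 normalized design variables
      for gen in range(max_gen):
          pc, pm = adaptive_rates(gen, max_gen)
          parents = pop[np.argsort(objective(pop))][:20]   # truncation selection
          kids = parents.copy()
          cross = rng.random(20) < pc                      # arithmetic crossover
          partners = parents[rng.permutation(20)]
          kids[cross] = 0.5 * (parents[cross] + partners[cross])
          mutate = rng.random(kids.shape) < pm             # Gaussian mutation
          kids[mutate] += rng.normal(0.0, 0.05, mutate.sum())
          pop = np.vstack([parents, np.clip(kids, 0.0, 1.0)])
      print("best design variables:", np.round(pop[np.argmin(objective(pop))], 3))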

  13. Adaptation and optimal chemotactic strategy for E. coli

    SciTech Connect

    Strong, S.P.; Bialek, William; Koberle, R.; Freedman, B.

    1998-04-01

    Extending the classic works of Berg and Purcell on the biophysics of bacterial chemotaxis, we find the optimal chemotactic strategy for the peritrichous bacterium E. coli in the high and low signal-to-noise ratio limits. The optimal strategy depends on properties of the environment and properties of the individual bacterium, and is therefore highly adaptive. We review experiments relevant to testing both the form of the proposed strategy and its adaptability, and propose extensions of them which could test the limits of the adaptability in this simplest sensory processing system. © 1998 The American Physical Society

  14. Adaptive-optics performance of Antarctic telescopes.

    PubMed

    Lawrence, Jon S

    2004-02-20

    The performance of natural guide star adaptive-optics systems for telescopes located on the Antarctic plateau is evaluated and compared with adaptive-optics systems operated with the characteristic mid-latitude atmosphere found at Mauna Kea. A 2-m telescope with tip-tilt correction and an 8-m telescope equipped with a high-order adaptive-optics system are considered. Because of the large isoplanatic angle of the South Pole atmosphere, the anisoplanatic error associated with an adaptive-optics correction is negligible, and the achievable resolution is determined only by the fitting error associated with the number of corrected wave-front modes, which depends on the number of actuators on the deformable mirror. The usable field of view of an adaptive-optics equipped Antarctic telescope is thus orders of magnitude larger than for a similar telescope located at a mid-latitude site; this large field of view obviates the necessity for multiconjugate adaptive-optics systems that use multiple laser guide stars. These results, combined with the low infrared sky backgrounds, indicate that the Antarctic plateau is the best site on Earth at which to perform high-resolution imaging with large telescopes, either over large fields of view or with appreciable sky coverage. Preliminary site-testing results obtained recently from the Dome Concordia station indicate that this site is far superior to even the South Pole. PMID:15008551

  15. Adaptive control based on retrospective cost optimization

    NASA Technical Reports Server (NTRS)

    Santillo, Mario A. (Inventor); Bernstein, Dennis S. (Inventor)

    2012-01-01

    A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.

  16. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well-known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.

  17. Camelina: Adaptation and performance of genotypes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Camelina (Camelina sativa L. Crantz) has shown potential as an alternative and biofuel crop in cereal-based cropping systems. Our study investigated the adaptation, performance, and yield stability among camelina genotypes across diverse US Pacific Northwest (PNW) environments. Seven named camelina ge...

  18. Identifying performance bottlenecks on modern microarchitectures using an adaptable probe

    SciTech Connect

    Griem, Gorden; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2004-01-20

    The gap between peak and delivered performance for scientific applications running on microprocessor-based systems has grown considerably in recent years. The inability to achieve the desired performance even on a single processor is often attributed to an inadequate memory system, but without identification or quantification of a specific bottleneck. In this work, we use an adaptable synthetic benchmark to isolate application characteristics that cause a significant drop in performance, giving application programmers and architects information about possible optimizations. Our adaptable probe, called sqmat, uses only four parameters to capture key characteristics of scientific workloads: working-set size, computational intensity, indirection, and irregularity. This paper describes the implementation of sqmat and uses its tunable parameters to evaluate four leading 64-bit microprocessors that are popular building blocks for current high performance systems: Intel Itanium2, AMD Opteron, IBM Power3, and IBM Power4.

  19. An adaptive response surface method for crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Yang, Ren-Jye; Zhu, Ping

    2013-11-01

    Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.

  20. Frequency domain synthesis of optimal inputs for adaptive identification and control

    NASA Technical Reports Server (NTRS)

    Fu, Li-Chen; Sastry, Shankar

    1987-01-01

    The input design problem of selecting appropriate inputs for use in SISO adaptive identification and model reference adaptive control algorithms is considered. Averaging theory is used to characterize the optimal inputs in the frequency domain. The design problem is formulated as an optimization problem which maximizes the smallest eigenvalue of the average information matrix over power constrained signals, and the global optimal solution is obtained using a convergent numerical algorithm. A bound on the frequency search range required in the design algorithm has been determined in terms of the desired performance.

  1. Metabolic Adaptation Processes That Converge to Optimal Biomass Flux Distributions

    PubMed Central

    Altafini, Claudio; Facchetti, Giuseppe

    2015-01-01

    In simple organisms like E. coli, the metabolic response to an external perturbation passes through a transient phase in which the activation of a number of latent pathways can guarantee survival at the expense of growth. Growth is gradually recovered as the organism adapts to the new condition. This adaptation can be modeled as a process of repeated metabolic adjustments obtained through the re-silencing of non-essential metabolic reactions, using growth rate as the selection probability for the phenotypes obtained. The resulting metabolic adaptation process tends naturally to steer the metabolic fluxes towards high-growth phenotypes. Quite remarkably, when applied to the central carbon metabolism of E. coli, it follows that nearly all flux distributions converge to the flux vector representing optimal growth, i.e., the solution of the biomass optimization problem turns out to be the dominant attractor of the metabolic adaptation process. PMID:26340476

  2. Laser tomography adaptive optics: a performance study.

    PubMed

    Tatulli, Eric; Ramaprakash, A N

    2013-12-01

    We present an analytical derivation of the on-axis performance of adaptive optics systems using a given number of guide stars of arbitrary altitude, distributed at arbitrary angular positions in the sky. The expressions of the residual error are given for cases of both continuous and discrete turbulent atmospheric profiles. Assuming Shack-Hartmann wavefront sensing with circular apertures, we demonstrate that the error is formally described by integrals of products of three Bessel functions. We compare the performance of adaptive optics correction when using natural, sodium, or Rayleigh laser guide stars. For small diameter class telescopes (≲5 m), we show that a small number of Rayleigh beacons can provide similar performance to that of a single sodium laser, for a lower overall cost of the instrument. For bigger apertures, using Rayleigh stars may not be such a suitable alternative because the cone effect becomes too severe and drastically degrades the quality of the correction. PMID:24323009

  3. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

    The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function. Depending on its value, the considered areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using different indicators such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model to accurately determine the functional relationship between the indicators and alpha is also not clear. As a result, the filter always under- or over-filters and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is also merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance when compared to existing approaches.
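
    For reference, the baseline Goldstein filter weights the spectrum of each interferogram patch by its smoothed magnitude raised to the power alpha. The patchwise sketch below illustrates only that baseline step; the coherence-driven alpha estimation, homogeneous pixel selection, and bootstrap correction of the paper are not reproduced, and the synthetic patch is an assumption.

      import numpy as np
      from numpy.fft import fft2, ifft2

      def goldstein_patch(ifg_patch, alpha, smooth=3):
          """Baseline Goldstein filtering of one complex interferogram patch:
          weight the patch spectrum by its smoothed magnitude raised to alpha."""
          spec = fft2(ifg_patch)
          mag = np.abs(spec)
          pad = smooth // 2
          padded = np.pad(mag, pad, mode="wrap")
          smoothed = np.zeros_like(mag)
          for i in range(mag.shape[0]):          # crude box smoothing of the magnitude
              for j in range(mag.shape[1]):
                  smoothed[i, j] = padded[i:i + smooth, j:j + smooth].mean()
          return ifft2(spec * smoothed ** alpha)

      # Example: a noisy 32x32 patch of a synthetic interferogram.
      rng = np.random.default_rng(0)
      y, x = np.mgrid[0:32, 0:32]
      phase = 0.3 * x + rng.normal(0.0, 0.8, (32, 32))        # fringe plus phase noise
      filtered = goldstein_patch(np.exp(1j * phase), alpha=0.7)
      print("residual phase std:", np.angle(filtered * np.exp(-1j * 0.3 * x)).std())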

  4. Identification-free adaptive optimal control based on switching predictive models

    NASA Astrophysics Data System (ADS)

    Luo, Wenguang; Pan, Shenghui; Ma, Zhaomin; Lan, Hongli

    2008-10-01

    An identification-free adaptive optimal control based on switching predictive models is proposed for systems with large inertia, long time delays, and multiple models. Multiple predictive models are established in the identification-free adaptive predictive controller and are switched at the optimal switching instants, governed by the switching law, according to the system's operating conditions in real time. The switching law is designed based on the most important characteristic parameter of the system, and the optimal switching instants are computed with optimal control theory for switched systems. The simulation results show that the proposed method is suitable for such systems, for example superheated steam temperature systems of electric power plants; it provides excellent control performance, improves disturbance rejection and self-adaptability, and places lower demands on predictive model precision.

  5. Performance predictions for the Keck telescope adaptive optics system

    SciTech Connect

    Gavel, D.T.; Olivier, S.S.

    1995-08-07

    The second Keck ten-meter telescope (Keck-II) is slated to have an infrared-optimized adaptive optics system in the 1997-1998 time frame. This system will provide diffraction-limited images in the 1-3 micron region and the ability to use a diffraction-limited spectroscopy slit. The AO system is currently in the preliminary design phase and considerable analysis has been performed in order to predict its performance under various seeing conditions. In particular we have investigated the point-spread function, energy through a spectroscopy slit, crowded-field contrast, object limiting magnitude, field of view, and sky coverage with natural and laser guide stars.

  6. Task Performance in Astronomical Adaptive Optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, J. C.; Caucci, Luca

    2010-01-01

    In objective or task-based assessment of image quality, figures of merit are defined by the performance of some specific observer on some task of scientific interest. This methodology is well established in medical imaging but is just beginning to be applied in astronomy. In this paper we survey the theory needed to understand the performance of ideal or ideal-linear (Hotelling) observers on detection tasks with adaptive-optical data. The theory is illustrated by discussing its application to detection of exoplanets from a sequence of short-exposure images. PMID:20890393

  7. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach

    PubMed Central

    Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.

    2014-01-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  8. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.

    PubMed

    Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A

    2013-02-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  9. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    NASA Astrophysics Data System (ADS)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. Then, an optimal design of a Proportional-Integral-Derivative (PID) controller is obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm to the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster, in less computation time, to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization problems such as industrial planning, resource allocation, scheduling, decision making, pattern recognition, and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.
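
    A minimal sketch of PSO-based PID tuning: particles encode (Kp, Ki, Kd), the cost is the ITAE of a simulated step response, and the adaptive momentum factor of AVURPSO is approximated here by a simple decaying inertia weight. The toy plant, gain bounds, and swarm settings are assumptions, not values from the cited work.

      import numpy as np

      rng = np.random.default_rng(0)

      def itae_cost(gains, dt=0.01, t_end=5.0):
          """ITAE of the closed-loop step response for a toy plant
          G(s) = 1 / (s^2 + 2s + 1), simulated with forward Euler."""
          kp, ki, kd = gains
          x1 = x2 = 0.0                        # plant output and its rate
          integ, prev_err, cost = 0.0, 1.0, 0.0
          for k in range(int(t_end / dt)):
              err = 1.0 - x1
              integ += err * dt
              deriv = (err - prev_err) / dt
              prev_err = err
              u = kp * err + ki * integ + kd * deriv
              x2 += (u - 2.0 * x2 - x1) * dt
              x1 += x2 * dt
              cost += (k * dt) * abs(err) * dt
          return cost

      # Plain PSO with a linearly decaying inertia weight standing in for the
      # adaptive momentum factor.
      n, iters = 20, 60
      pos = rng.uniform(0.0, 10.0, (n, 3))
      vel = np.zeros((n, 3))
      pbest, pbest_f = pos.copy(), np.array([itae_cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_f)]
      for it in range(iters):
          w = 0.9 - 0.5 * it / iters
          vel = (w * vel + 1.5 * rng.random((n, 3)) * (pbest - pos)
                         + 1.5 * rng.random((n, 3)) * (gbest - pos))
          pos = np.clip(pos + vel, 0.0, 10.0)
          f = np.array([itae_cost(p) for p in pos])
          better = f < pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
          gbest = pbest[np.argmin(pbest_f)]
      print("tuned PID gains (Kp, Ki, Kd):", np.round(gbest, 2))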

  10. Sootblowing optimization for improved boiler performance

    DOEpatents

    James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J

    2013-07-30

    A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.

  11. Sootblowing optimization for improved boiler performance

    DOEpatents

    James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J.

    2012-12-25

    A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.

  12. Autonomic care platform for optimizing query performance

    PubMed Central

    2013-01-01

    Background As the amount of information in electronic health care systems increases, data operations get more complicated and time-consuming. Intensive Care platforms require a timely processing of data retrievals to guarantee the continuous display of recent data of patients. Physicians and nurses rely on this data for their decision making. Manual optimization of query executions has become difficult to handle due to the increased amount of queries across multiple sources. Hence, a more automated management is necessary to increase the performance of database queries. The autonomic computing paradigm promises an approach in which the system adapts itself and acts as self-managing entity, thereby limiting human interventions and taking actions. Despite the usage of autonomic control loops in network and software systems, this approach has not been applied so far for health information systems. Methods We extend the COSARA architecture, an infection surveillance and antibiotic management service platform for the Intensive Care Unit (ICU), with self-managed components to increase the performance of data retrievals. We used real-life ICU COSARA queries to analyse slow performance and measure the impact of optimizations. Each day more than 2 million COSARA queries are executed. Three control loops, which monitor the executions and take action, have been proposed: reactive, deliberative and reflective control loops. We focus on improvements of the execution time of microbiology queries directly related to the visual displays of patients’ data on the bedside screens. Results The results show that autonomic control loops are beneficial for the optimizations in the data executions in the ICU. The application of reactive control loop results in a reduction of 8.61% of the average execution time of microbiology results. The combined application of the reactive and deliberative control loop results in an average query time reduction of 10.92% and the combination of

  13. Topology optimization of pressure adaptive honeycomb for a morphing flap

    NASA Astrophysics Data System (ADS)

    Vos, Roelof; Scheepstra, Jan; Barrett, Ron

    2011-03-01

    The paper begins with a brief historical overview of pressure adaptive materials and structures. By examining avian anatomy, it is seen that pressure-adaptive structures have been used successfully in the natural world to hold structural positions for extended periods of time and yet allow for dynamic shape changes from one flight state to the next. More modern pneumatic actuators, including FAA-certified autopilot servoactuators, are frequently used on aircraft around the world. Pneumatic artificial muscles (PAM) show good promise as aircraft actuators, but follow the traditional model of load concentration and distribution commonly found in aircraft. A new system is proposed which leaves distributed loads distributed and manipulates structures through a distributed actuator. By using Pressure Adaptive Honeycomb (PAH), it is shown that large structural deformations in excess of 50% strains can be achieved while maintaining full structural integrity and enabling secondary flight control mechanisms like flaps. The successful implementation of pressure-adaptive honeycomb in the trailing edge of a wing section sparked the motivation for subsequent research into the optimal topology of the pressure adaptive honeycomb within the trailing edge of a morphing flap. As input for the optimization, two known shapes are required: a desired shape in cruise configuration and a desired shape in landing configuration. In addition, the boundary conditions and load cases (including aerodynamic loads and internal pressure loads) should be specified for each condition. Finally, a set of six design variables is specified relating to the honeycomb and upper skin topology of the morphing flap. A finite-element model of the pressure-adaptive honeycomb structure is developed, specifically tailored to generate fast but reliable results for a given combination of external loading, input variables, and boundary conditions. Based on two bench tests it is shown that this model correlates well

  14. Optimizing Input/Output Using Adaptive File System Policies

    NASA Technical Reports Server (NTRS)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
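
    As a rough illustration of the kind of mechanism described here, the sketch below classifies a recent access pattern, picks a caching/prefetching policy, and lets a performance sensor shrink the prefetch depth when it is not paying off. The policy names, thresholds, and feedback rule are illustrative assumptions, not the paper's classification framework.

      # Minimal sketch of classification-based file system policy selection with sensor feedback.
      def classify(offsets):
          """Label an access pattern from recent request offsets."""
          strides = {b - a for a, b in zip(offsets, offsets[1:])}
          if strides == {1}:
              return "sequential"
          if len(strides) == 1:
              return "strided"
          return "random"

      POLICIES = {
          "sequential": {"cache": "MRU", "prefetch_depth": 8},
          "strided":    {"cache": "LRU", "prefetch_depth": 4},
          "random":     {"cache": "LRU", "prefetch_depth": 0},
      }

      def tune(policy, hit_rate):
          """Performance-sensor feedback: halve prefetching when the hit rate is poor."""
          if hit_rate < 0.5 and policy["prefetch_depth"] > 0:
              policy["prefetch_depth"] //= 2
          return policy

      policy = tune(dict(POLICIES[classify([0, 1, 2, 3, 4])]), hit_rate=0.9)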

  15. An optimized, universal hardware-based adaptive correlation receiver architecture

    NASA Astrophysics Data System (ADS)

    Zhu, Zaidi; Suarez, Hernan; Zhang, Yan; Wang, Shang

    2014-05-01

    Traditional radar RF transceivers, similar to communication transceivers, have basic elements such as baseband waveform processing, IF/RF up-down conversion, transmitter power circuits, receiver front-ends, and antennas, which are shown in the upper half of Figure 1. For modern radars with diversified and sophisticated waveforms, we frequently observe that the transceiver behaviors, especially nonlinear behaviors, depend on the waveform amplitude, frequency content, and instantaneous phase. It is usually a troublesome process to tune an RF transceiver to its optimum when different waveforms are used. Another issue arises from the interference caused by the waveforms - for example, the range side-lobe (RSL) of a waveform may be further increased by distortions once the signal passes through the entire transceiver chain. This study is inspired by two existing solutions from the commercial communication industry, digital pre-distortion (DPD) and adaptive channel estimation and Interference Mitigation (AIM), and combines these technologies into a single chip or board that can be inserted into the existing transceiver system. This device is named the RF Transceiver Optimizer (RTO). The lower half of Figure 1 shows the basic elements of the RTO. With the RTO, the digital baseband processing does not need to take into account the transceiver performance with diversified waveforms, such as the transmitter efficiency and chain distortion (and the intermodulation products caused by distortions). Nor does it need to be concerned with the pulse compression (or correlation receiver) process and the related mitigation. The focus is simply the information about the ground truth carried by the main peak of the correlation receiver outputs. The RTO can be considered an extension of the existing calibration process, with the benefits of being automatic, adaptive, and universal. Currently, the main techniques to implement the RTO are the digital pre- or -post

  16. Optimized quantum sensing with a single electron spin using real-time adaptive measurements

    NASA Astrophysics Data System (ADS)

    Bonato, C.; Blok, M. S.; Dinani, H. T.; Berry, D. W.; Markham, M. L.; Twitchen, D. J.; Hanson, R.

    2016-03-01

    Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz^(-1/2) over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.

  17. Performance of adaptive optics at Lick Observatory

    SciTech Connect

    Olivier, S.S.; An, J.; Avicola, K.

    1994-03-01

    A prototype adaptive optics system has been developed at Lawrence Livermore National Laboratory (LLNL) for use at Lick Observatory. This system is based on an ITEX 69-actuator continuous-surface deformable mirror, a Kodak fast-framing intensified CCD camera, and a Mercury VME board containing four Intel i860 processors. The system has been tested using natural reference stars on the 40-inch Nickel telescope at Lick Observatory yielding up to a factor of 10 increase in image peak intensity and a factor of 6 reduction in image full width at half maximum (FWHM). These results are consistent with theoretical expectations. In order to improve performance, the intensified CCD camera will be replaced by a high-quantum-efficiency low-noise fast CCD camera built for LLNL by Adaptive Optics Associates using a chip developed by Lincoln Laboratory, and the 69-actuator deformable mirror will be replaced by a 127-actuator deformable mirror developed at LLNL. With these upgrades, the system should perform well in median seeing conditions on the 120-inch Shane telescope for observing wavelengths longer than ~1 μm and using natural reference stars brighter than m_R ~ 10 or using the laser currently being developed at LLNL for use at Lick Observatory to generate a sodium-layer reference star.

  18. Road map to adaptive optimal control. [jet engine control

    NASA Technical Reports Server (NTRS)

    Boyer, R.

    1980-01-01

    A building block control structure leading toward adaptive, optimal control for jet engines is developed. This approach simplifies the addition of new features and allows for easier checkout of the control by providing a baseline system for comparison. Also, it is possible to eliminate certain features that do not pay off by being selective about which new building blocks are added to the baseline system. The minimum risk approach specifically addresses the need for active identification of the plant to be controlled in real time and real-time optimization of the control for the identified plant.

  19. A Hierarchical Adaptive Approach to Optimal Experimental Design

    PubMed Central

    Kim, Woojae; Pitt, Mark A.; Lu, Zhong-Lin; Steyvers, Mark; Myung, Jay I.

    2014-01-01

    Experimentation is at the core of research in the behavioral and neural sciences, yet observations can be expensive and time-consuming to acquire (e.g., MRI scans, responses from infant participants). A major interest of researchers is designing experiments that lead to maximal accumulation of information about the phenomenon under study with the fewest possible number of observations. In addressing this challenge, statisticians have developed adaptive design optimization methods. This letter introduces a hierarchical Bayes extension of adaptive design optimization that provides a judicious way to exploit two complementary schemes of inference (with past and future data) to achieve even greater accuracy and efficiency in information gain. We demonstrate the method in a simulation experiment in the field of visual perception. PMID:25149697
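
    The hierarchical Bayes layer is beyond a short sketch, but the core adaptive design optimization step, choosing the next stimulus to maximize the expected information gain about a parameter, can be illustrated on a toy one-parameter psychometric task. The logistic model, the grids, and the greedy one-step criterion are assumptions of this sketch, not the paper's simulation setup.

      import numpy as np

      thetas = np.linspace(0.1, 1.0, 50)        # candidate threshold parameters
      designs = np.linspace(0.05, 1.0, 30)      # candidate stimulus intensities
      prior = np.full(len(thetas), 1.0 / len(thetas))

      def p_correct(theta, d):
          return 1.0 / (1.0 + np.exp(-10.0 * (d - theta)))   # toy psychometric function

      def expected_info_gain(prior, d):
          """Expected KL divergence between posterior and prior over both outcomes."""
          p1 = p_correct(thetas, d)
          gain = 0.0
          for py in (p1, 1.0 - p1):             # outcome = correct / incorrect
              marginal = np.dot(prior, py)
              post = prior * py / marginal
              gain += marginal * np.sum(post * np.log(post / prior))
          return gain

      best_design = designs[np.argmax([expected_info_gain(prior, d) for d in designs])]
      # After observing the response, the prior is replaced by the posterior and the step repeats.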

  20. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background level using either median filtering or the method of bilateral spatial contrast.

  1. Achieving Optimal Self-Adaptivity for Dynamic Tuning of Organic Semiconductors through Resonance Engineering.

    PubMed

    Tao, Ye; Xu, Lijia; Zhang, Zhen; Chen, Runfeng; Li, Huanhuan; Xu, Hui; Zheng, Chao; Huang, Wei

    2016-08-01

    Current static-state explorations of organic semiconductors for optimal material properties and device performance are hindered by limited insights into the dynamically changed molecular states and charge transport and energy transfer processes upon device operation. Here, we propose a simple yet successful strategy, resonance variation-based dynamic adaptation (RVDA), to realize optimized self-adaptive properties in donor-resonance-acceptor molecules by engineering the resonance variation for dynamic tuning of organic semiconductors. Organic light-emitting diodes hosted by these RVDA materials exhibit remarkably high performance, with external quantum efficiencies up to 21.7% and favorable device stability. Our approach, which supports simultaneous realization of dynamically adapted and selectively enhanced properties via resonance engineering, illustrates a feasible design map for the preparation of smart organic semiconductors capable of dynamic structure and property modulations, promoting the studies of organic electronics from static to dynamic. PMID:27403886

  2. A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment

    NASA Astrophysics Data System (ADS)

    Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir

    2015-07-01

    This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization for the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force, and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 and 0-238 N s m^-1 through the viscous and electromagnetic components, respectively.

  3. A Novel Adaptive Cuckoo Search for Optimal Query Plan Generation

    PubMed Central

    Gomathi, Ramalingam; Sharmila, Dhandapani

    2014-01-01

    The rapid day-by-day growth in the number of web pages has led to the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the resource description framework (RDF). To reduce the execution time of queries over large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization of semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS) for querying and generating optimal query plans for large RDF graphs is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant improvements in query execution time. The extent to which the algorithm is efficient is tested and the results are documented. PMID:25215330
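
    The RDF-specific encoding and the published adaptive rule are not described in the abstract, so the sketch below only illustrates the general shape of a cuckoo-style search over join orders for a query plan, with a move size that shrinks as the search progresses. The toy cost model, the swap-based moves, and the example selectivities are assumptions.

      import random

      def plan_cost(order, selectivity):
          """Toy left-deep join cost: sum of intermediate result sizes."""
          cost, size = 0.0, 1.0
          for rel in order:
              size *= selectivity[rel]
              cost += size
          return cost

      def adaptive_cuckoo_join_order(selectivity, n_nests=10, iters=500, pa=0.25):
          rels = list(selectivity)
          nests = [random.sample(rels, len(rels)) for _ in range(n_nests)]
          best = min(nests, key=lambda o: plan_cost(o, selectivity))
          for k in range(iters):
              hops = max(1, int((1 - k / iters) * len(rels)))     # adaptive move size
              for i, nest in enumerate(nests):
                  cand = nest[:]
                  for _ in range(hops):                           # heavy moves early, light moves late
                      a, b = random.sample(range(len(rels)), 2)
                      cand[a], cand[b] = cand[b], cand[a]
                  if plan_cost(cand, selectivity) < plan_cost(nest, selectivity):
                      nests[i] = cand
              nests.sort(key=lambda o: plan_cost(o, selectivity))
              for i in range(int((1 - pa) * n_nests), n_nests):   # abandon the worst nests
                  nests[i] = random.sample(rels, len(rels))
              best = min(nests + [best], key=lambda o: plan_cost(o, selectivity))
          return best

      # e.g. adaptive_cuckoo_join_order({"r1": 0.1, "r2": 0.6, "r3": 0.3, "r4": 0.9})  (made-up selectivities)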

  4. A novel adaptive Cuckoo search for optimal query plan generation.

    PubMed

    Gomathi, Ramalingam; Sharmila, Dhandapani

    2014-01-01

    The rapid day-by-day growth in the number of web pages has led to the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the resource description framework (RDF). To reduce the execution time of queries over large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization of semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS) for querying and generating optimal query plans for large RDF graphs is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant improvements in query execution time. The extent to which the algorithm is efficient is tested and the results are documented. PMID:25215330

  5. Adaptive Multi-Agent Systems for Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queens problem and K-SAT, validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.

  6. Trajectory Planning and Optimized Adaptive Control for a Class of Wheeled Inverted Pendulum Vehicle Models.

    PubMed

    Yang, Chenguang; Li, Zhijun; Li, Jing

    2013-02-01

    In this paper, we investigate optimized adaptive control and trajectory generation for a class of wheeled inverted pendulum (WIP) models of vehicle systems. To shape the controlled vehicle dynamics so that both motion tracking errors and angular accelerations are minimized, we employ the linear quadratic regulation (LQR) optimization technique to obtain an optimal reference model. Adaptive control is then developed using the variable structure method to ensure that the reference model is exactly matched within a finite time horizon, even in the presence of various internal and external uncertainties. The minimized yaw and tilt angular accelerations help to enhance the vehicle rider's comfort. In addition, due to the underactuated mechanism of WIP, the vehicle forward velocity dynamics cannot be controlled separately from the pendulum tilt angle dynamics. Inspired by the control strategy of human drivers, who usually manipulate the tilt angle to control the forward velocity, we design a neural-network-based adaptive generator of implicit control trajectory (AGICT) of the tilt angle which indirectly "controls" the forward velocity such that it tracks the desired velocity asymptotically. The stability and optimal tracking performance have been rigorously established by theoretical analysis. In addition, simulation studies have been carried out to demonstrate the efficiency of the developed AGICT and optimized adaptive controller. PMID:22695357
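
    The LQR step that produces the optimal reference model can be sketched as follows; the linearized dynamics and the weighting matrices below are placeholders, not the paper's WIP vehicle model.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0],
                    [9.8, 0.0]])        # toy linearized tilt dynamics
      B = np.array([[0.0],
                    [1.0]])
      Q = np.diag([10.0, 1.0])          # penalize tracking error and rate (acceleration proxy)
      R = np.array([[0.1]])

      P = solve_continuous_are(A, B, Q, R)          # solve the algebraic Riccati equation
      K = np.linalg.solve(R, B.T @ P)               # optimal state-feedback gain
      A_ref = A - B @ K                             # reference-model dynamics the adaptive law must match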

  7. Optimizing raid performance with cache

    NASA Technical Reports Server (NTRS)

    Bouzari, Alex

    1994-01-01

    We live in a world of increasingly complex applications and operating systems. Information is increasing at a mind-boggling rate. The consolidation of text, voice, and imaging represents an even greater challenge for our information systems, forcing us to address three important questions: Where do we store all this information? How do we access it? And how do we protect it against the threat of loss or damage? Introduced in the 1980s, RAID (Redundant Arrays of Independent Disks) represents a cost-effective solution to the needs of the information age. While fulfilling expectations for high storage capacity and reliability, RAID is sometimes subject to criticism in the area of performance. However, there are design elements that can significantly enhance performance. They can be subdivided into two areas: (1) RAID levels, or basic architecture, and (2) enhancement schemes such as intelligent caching, support of tagged command queuing, and use of SCSI-2 Fast and Wide features.

  8. Implementation and Performance Issues in Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Braun, Robert; Gage, Peter; Kroo, Ilan; Sobieski, Ian

    1996-01-01

    Collaborative optimization is a multidisciplinary design architecture that is well-suited to large-scale multidisciplinary optimization problems. This paper compares this approach with other architectures and examines the details of the formulation and some aspects of its performance. A particular version of the architecture is proposed to better accommodate the occurrence of multiple feasible regions. The use of system level inequality constraints is shown to increase the convergence rate. A series of simple test problems, demonstrated to challenge related optimization architectures, is successfully solved with collaborative optimization.

  9. Optimal control law for classical and multiconjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Le Roux, Brice; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Mugnier, Laurent M.; Fusco, Thierry

    2004-07-01

    Classical adaptive optics (AO) is now a widespread technique for high-resolution imaging with astronomical ground-based telescopes. It generally uses simple and efficient control algorithms. Multiconjugate adaptive optics (MCAO) is a more recent and very promising technique that should extend the corrected field of view. This technique has not yet been experimentally validated, but simulations already show its high potential. The importance for MCAO of an optimal reconstruction using turbulence spatial statistics has already been demonstrated through open-loop simulations. We propose an optimal closed-loop control law that accounts for both spatial and temporal statistics. The prior information on the turbulence, as well as on the wave-front sensing noise, is expressed in a state-space model. The optimal phase estimation is then given by a Kalman filter. The equations describing the system are given and the underlying assumptions explained. The control law is then derived. The gain brought by this approach is demonstrated through MCAO numerical simulations representative of astronomical observation on an 8-m-class telescope in the near infrared. We also discuss the application of this control approach to classical AO. Even in classical AO, the technique could be relevant especially for future extreme AO systems.
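
    The optimal estimation step described here can be pictured, for a single turbulent mode, as a standard Kalman predict/update cycle driven by the wavefront-sensor measurement. The first-order (AR(1)) turbulence model and the noise levels below are illustrative assumptions, not the paper's state-space model.

      import numpy as np

      a, sigma_turb, sigma_noise = 0.995, 0.1, 0.05    # temporal correlation, process and measurement noise
      A, C = a, 1.0
      Q, R = sigma_turb**2, sigma_noise**2

      def kalman_step(y, x_hat, P):
          x_pred = A * x_hat                           # predict the phase mode
          P_pred = A * P * A + Q
          K = P_pred * C / (C * P_pred * C + R)        # Kalman gain
          x_hat = x_pred + K * (y - C * x_pred)        # update with the sensor measurement y
          P = (1.0 - K * C) * P_pred
          return x_hat, P

      x_hat, P = 0.0, 1.0
      for y in np.random.normal(0.0, 0.2, 100):        # stand-in sensor samples
          x_hat, P = kalman_step(y, x_hat, P)
      # The deformable-mirror command would then be chosen to cancel the predicted phase, u = -x_hat.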

  10. Optimal control law for classical and multiconjugate adaptive optics.

    PubMed

    Le Roux, Brice; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Mugnier, Laurent M; Fusco, Thierry

    2004-07-01

    Classical adaptive optics (AO) is now a widespread technique for high-resolution imaging with astronomical ground-based telescopes. It generally uses simple and efficient control algorithms. Multiconjugate adaptive optics (MCAO) is a more recent and very promising technique that should extend the corrected field of view. This technique has not yet been experimentally validated, but simulations already show its high potential. The importance for MCAO of an optimal reconstruction using turbulence spatial statistics has already been demonstrated through open-loop simulations. We propose an optimal closed-loop control law that accounts for both spatial and temporal statistics. The prior information on the turbulence, as well as on the wave-front sensing noise, is expressed in a state-space model. The optimal phase estimation is then given by a Kalman filter. The equations describing the system are given and the underlying assumptions explained. The control law is then derived. The gain brought by this approach is demonstrated through MCAO numerical simulations representative of astronomical observation on an 8-m-class telescope in the near infrared. We also discuss the application of this control approach to classical AO. Even in classical AO, the technique could be relevant especially for future extreme AO systems. PMID:15260258

  11. Modeling for deformable mirrors and the adaptive optics optimization program

    SciTech Connect

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-03-18

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded-up in an interpreted array processing computer language.

  12. Performance optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.

    1991-01-01

    As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general purpose optimization program CONMIN and approximate analyses. Sensitivity analyses consisting of derivatives of the objective function and constraints are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.

  13. Pulsed Inductive Plasma Acceleration: Performance Optimization Criteria

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.

    2014-01-01

    Optimization criteria for pulsed inductive plasma acceleration are developed using an acceleration model consisting of a set of coupled circuit equations describing the time-varying current in the thruster and a one-dimensional momentum equation. The model is nondimensionalized, resulting in the identification of several scaling parameters that are varied to optimize the performance of the thruster. The analysis reveals the benefits of underdamped current waveforms and leads to a performance optimization criterion that requires the matching of the natural period of the discharge and the acceleration timescale imposed by the inertia of the working gas. In addition, the performance increases when a greater fraction of the propellant is initially located nearer to the inductive acceleration coil. While the dimensionless model uses a constant temperature formulation in calculating performance, the scaling parameters that yield the optimum performance are shown to be relatively invariant if a self-consistent description of energy in the plasma is instead used.

  14. Circadian clocks optimally adapt to sunlight for reliable synchronization

    PubMed Central

    Hasegawa, Yoshihiko; Arita, Masanori

    2014-01-01

    Circadian oscillation provides selection advantages through synchronization to the daylight cycle. However, a reliable clock must be designed through two conflicting properties: entrainability to synchronize internal time with periodic stimuli such as sunlight, and regularity to oscillate with a precise period. These two aspects do not easily coexist, because better entrainability favours higher sensitivity which may sacrifice regularity. To investigate conditions for satisfying the two properties, we analytically calculated the optimal phase–response curve with a variational method. Our results indicate an existence of a dead zone, i.e. a time period during which input stimuli neither advance nor delay the clock. A dead zone appears only when input stimuli obey the time course of actual solar radiation, but a simple sine curve cannot yield a dead zone. Our calculation demonstrates that every circadian clock with a dead zone is optimally adapted to the daylight cycle. PMID:24352677

  15. A Computational Model of Optimal Vein Graft Adaptation in an Arterial Environment

    NASA Astrophysics Data System (ADS)

    Ramachandra, Abhay B.; Sankaran, Sethuraman; Humphrey, Jay; Marsden, Alison

    2012-11-01

    In coronary artery disease, surgical revascularization using venous bypass grafts is performed to relieve symptoms and prolong life. Coronary bypass graft surgery is performed on approximately 500,000 people every year in the United States, with graft failure rates as high as 50% within 5 years. When a vein graft is implanted in the arterial system it adapts to the high flow rate and high pressure of the arterial environment by changing composition and geometry, and thus stiffness. Hemodynamic loads, resulting in altered wall shear and intramural stresses, are major factors impacting vein graft remodeling. Here, a constrained mixture theory of growth and remodeling for arteries is extended to model the evolution of a vein graft subjected to arterial flow and pressure conditions. A derivative-free optimization method is used to estimate the optimal set of constitutive parameters that best match passive biaxial mouse inferior vena cava data from experiments. Optimization is performed using surrogate management framework, a pattern search method with established convergence theory. The resulting parameter set is used to predict optimal vein adaptation in an arterial environment for two illustrative cases: a) Step change b) Gradual change in loading. Results are compared against vein graft data from the literature and a possible set of mechanisms for sub-optimal vein graft remodeling is suggested.

  16. Efficient retrieval of landscape Hessian: forced optimal covariance adaptive learning.

    PubMed

    Shir, Ofer M; Roslund, Jonathan; Whitley, Darrell; Rabitz, Herschel

    2014-06-01

    Knowledge of the Hessian matrix at the landscape optimum of a controlled physical observable offers valuable information about the system robustness to control noise. The Hessian can also assist in physical landscape characterization, which is of particular interest in quantum system control experiments. The recently developed landscape theoretical analysis motivated the compilation of an automated method to learn the Hessian matrix about the global optimum without derivative measurements from noisy data. The current study introduces the forced optimal covariance adaptive learning (FOCAL) technique for this purpose. FOCAL relies on the covariance matrix adaptation evolution strategy (CMA-ES) that exploits covariance information amongst the control variables by means of principal component analysis. The FOCAL technique is designed to operate with experimental optimization, generally involving continuous high-dimensional search landscapes (≳30) with large Hessian condition numbers (≳10^4). This paper introduces the theoretical foundations of the inverse relationship between the covariance learned by the evolution strategy and the actual Hessian matrix of the landscape. FOCAL is presented and demonstrated to retrieve the Hessian matrix with high fidelity on both model landscapes and quantum control experiments, which are observed to possess nonseparable, nonquadratic search landscapes. The recovered Hessian forms were corroborated by physical knowledge of the systems. The implications of FOCAL extend beyond the investigated studies to potentially cover other physically motivated multivariate landscapes. PMID:25019911

  17. Optimization of the performances of correlation filters by pre-processing the input plane

    NASA Astrophysics Data System (ADS)

    Bouzidi, F.; Elbouz, M.; Alfalou, A.; Brosseau, C.; Fakhfakh, A.

    2016-01-01

    We report findings on the optimization of the performance of correlation filters. First, we propound and validate an optimization of ROC curves adapted to the correlation technique. Then, analysis suggests that a pre-processing of the input plane leads to a compromise between the robustness of the adapted filter and the discrimination of the inverse filter for face recognition applications. Rewardingly, our results demonstrate that this method is remarkably effective at increasing the performance of a VanderLugt correlator.

  18. Optimizing Satellite Communications With Adaptive and Phased Array Antennas

    NASA Technical Reports Server (NTRS)

    Ingram, Mary Ann; Romanofsky, Robert; Lee, Richard Q.; Miranda, Felix; Popovic, Zoya; Langley, John; Barott, William C.; Ahmed, M. Usman; Mandl, Dan

    2004-01-01

    A new adaptive antenna array architecture for low-earth-orbiting satellite ground stations is being investigated. These ground stations are intended to have no moving parts and could potentially be operated in populated areas, where terrestrial interference is likely. The architecture includes multiple, moderately directive phased arrays. The phased arrays, each steered in the approximate direction of the satellite, are adaptively combined to enhance the Signal-to-Noise and Interference-Ratio (SNIR) of the desired satellite. The size of each phased array is to be traded off against the number of phased arrays to optimize cost while meeting a bit-error-rate threshold. Also, two phased array architectures are being prototyped: a space-fed lens array and a reflectarray. If two co-channel satellites are in the field of view of the phased arrays, then multi-user detection techniques may enable simultaneous demodulation of the satellite signals, also known as Space Division Multiple Access (SDMA). We report on Phase I of the project, in which fixed directional elements are adaptively combined in a prototype to demodulate the S-band downlink of the EO-1 satellite, which is part of the New Millennium Program at NASA.

  19. Optimal spectral tracking--adapting to dynamic regime change.

    PubMed

    Brittain, John-Stuart; Halliday, David M

    2011-01-30

    Real world data do not always obey the statistical restraints imposed upon them by sophisticated analysis techniques. In spectral analysis for instance, an ergodic process--the interchangeability of temporal for spatial averaging--is assumed for a repeat-trial design. Many evolutionary scenarios, such as learning and motor consolidation, do not conform to such linear behaviour and should be approached from a more flexible perspective. To this end we previously introduced the method of optimal spectral tracking (OST) in the study of trial-varying parameters. In this extension to our work we modify the OST routines to provide an adaptive implementation capable of reacting to dynamic transitions in the underlying system state. In so doing, we generalise our approach to characterise both slow-varying and rapid fluctuations in time-series, simultaneously providing a metric of system stability. The approach is first applied to a surrogate dataset and compared to both our original non-adaptive solution and spectrogram approaches. The adaptive OST is seen to display fast convergence and desirable statistical properties. All three approaches are then applied to a neurophysiological recording obtained during a study on anaesthetic monitoring. Local field potentials acquired from the posterior hypothalamic region of a deep brain stimulation patient undergoing anaesthesia were analysed. The characterisation of features such as response delay, time-to-peak and modulation brevity are considered. PMID:21115043

  20. Optimal control for unknown discrete-time nonlinear Markov jump systems using adaptive dynamic programming.

    PubMed

    Zhong, Xiangnan; He, Haibo; Zhang, Huaguang; Wang, Zhanshan

    2014-12-01

    In this paper, we develop and analyze an optimal control method for a class of discrete-time nonlinear Markov jump systems (MJSs) with unknown system dynamics. Specifically, an identifier is established for the unknown systems to approximate system states, and an optimal control approach for nonlinear MJSs is developed to solve the Hamilton-Jacobi-Bellman equation based on the adaptive dynamic programming technique. We also develop detailed stability analysis of the control approach, including the convergence of the performance index function for nonlinear MJSs and the existence of the corresponding admissible control. Neural network techniques are used to approximate the proposed performance index function and the control law. To demonstrate the effectiveness of our approach, three simulation studies, one linear case, one nonlinear case, and one single link robot arm case, are used to validate the performance of the proposed optimal control method. PMID:25420238
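
    The nonlinear Markov-jump setting and the identifier network are beyond a short sketch, but the underlying adaptive dynamic programming recursion for the Hamilton-Jacobi-Bellman equation can be shown on a linear-quadratic toy problem, where the value iterates are known to converge to the discrete-time Riccati solution. The system matrices below are placeholders.

      import numpy as np

      A = np.array([[1.0, 0.1],
                    [0.0, 1.0]])
      B = np.array([[0.0],
                    [0.1]])
      Q, R = np.eye(2), np.array([[1.0]])

      P = np.zeros((2, 2))                                   # value function V_k(x) = x' P_k x
      for _ in range(500):                                   # V_{k+1}(x) = min_u [x'Qx + u'Ru + V_k(Ax + Bu)]
          K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy policy w.r.t. the current value
          Acl = A - B @ K
          P = Q + K.T @ R @ K + Acl.T @ P @ Acl              # Bellman backup under the greedy policy
      print(np.round(P, 3))                                  # approximates the optimal cost matrix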

  1. Logit Model based Performance Analysis of an Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Hernández, J. A.; Ospina, J. D.; Villada, D.

    2011-09-01

    In this paper, the performance of the Multi Dynamics Algorithm for Global Optimization (MAGO) is studied through simulation using five standard test functions. To guarantee that the algorithm converges to a global optimum, a set of experiments searching for the best combination of the only two MAGO parameters, the number of iterations and the number of potential solutions, is considered. These parameters are sequentially varied while the dimension of several test functions is increased, and performance curves are obtained. The MAGO was originally designed to perform well with small populations; the self-adaptation task with small populations therefore becomes more challenging as the problem dimension grows. The results show that the probability of convergence to an optimal solution increases with the number of iterations and the number of potential solutions; however, the success rate drops as the dimension of the problem increases. A logit model is used to determine the mutual effects between the parameters of the algorithm.

  2. Optimal and adaptive control in canine postural regulation.

    PubMed

    Schuster, D; Talbott, R E

    1980-07-01

    For analytic purposes, dogs trained to stand quietly on an oscillating platform can be likened to a fixed-length inverted pendulum with a point mass. Describing function analysis permitted derivation of torque and error values as functions of phase and gain relative to platform movement. A phase criterion was determined for minimization of either control torque at a given error amplitude or error at a given control torque amplitude. Describing functions for dogs with and without vision approached optimal phase. Stretch reflex control involving proportional-plus-rate feedback is not sufficient to account for the approach to optimal phase. Blindfolded labyrinthectomized dogs did not exhibit optimal behavior and the phase constraint for stretch reflex control was satisfied at most frequencies. The observed behavior is best accounted for by a model involving both otolith and visual feedforward (pursuit-precognitive) control processes. Reductions in phase lag by blindfolded dogs during the first few cycles of platform motion provide evidence of adaptive control. PMID:7396044

  3. Performance optimization of digital VLSI circuits

    SciTech Connect

    Marple, D.P.

    1987-01-01

    Designers of digital VLSI circuits have virtually no computer tools available for the optimization of circuit performance. Instead, a designer relies extensively on circuit-analysis tools, such as circuit simulation (SPICE) and/or critical-delay-path analysis. A circuit-analysis approach to digital design is very labor-intensive and seldom produces a circuit with optimum area/delay or power/delay trade-off. The goal of this research is to provide a synthesis approach to the design of digital circuits by finding the sizes of transistors that optimize circuit performance (delay, area, power). Solutions are found that are optimum for all possible delay paths of a given circuit and not for just a single path. The approach of this research is to formulate the problem of area/delay or power/delay optimization as a nonlinear program. Conditions for optimality are then established using graph theory and Kuhn-Tucker conditions. Finally, the use of augmented-Lagrangian and projected-Lagrangian algorithms is reviewed for the solution of the nonlinear programs. Two computer programs, PLATO and COP, were developed by the author to optimize CMOS PLA's (PLATO) and general CMOS circuits (COP). These tools provably find the globally optimum transistor sizes for a given circuit. Results are presented for PLA's and small- to medium-sized cells.

  4. Proficient brain for optimal performance: the MAP model perspective.

    PubMed

    Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the "neural efficiency hypothesis." We also observed more ERD as related to optimal-controlled performance in conditions of "neural adaptability" and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques. PMID:27257557

  5. An adaptive ant colony system algorithm for continuous-space optimization problems.

    PubMed

    Li, Yan-jun; Wu, Tie-jun

    2003-01-01

    Ant colony algorithms comprise a novel category of evolutionary computation methods for optimization problems, especially for sequencing-type combinatorial optimization problems. An adaptive ant colony algorithm is proposed in this paper to tackle continuous-space optimization problems, using a new objective-function-based heuristic pheromone assignment approach for pheromone update to filter solution candidates. Global optimal solutions can be reached more rapidly by self-adjusting the path searching behaviors of the ants according to objective values. The performance of the proposed algorithm is compared with a basic ant colony algorithm and a Sequential Quadratic Programming approach in solving two benchmark problems with multiple extremes. The results indicated that the efficiency and reliability of the proposed algorithm were greatly improved. PMID:12656341
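
    The paper's pheromone assignment rule is only described at a high level, so the sketch below uses a generic archive-based continuous ant colony scheme: ants sample around good solutions, weights follow objective values, and the sampling spread self-adjusts as the archive contracts. The weighting and the archive mechanics are assumptions, not the published update.

      import numpy as np

      def aco_continuous(cost, dim, n_ants=20, archive=10, iters=200, bounds=(-5.0, 5.0)):
          rng = np.random.default_rng(3)
          lo, hi = bounds
          sols = rng.uniform(lo, hi, (archive, dim))
          fit = np.array([cost(s) for s in sols])
          for _ in range(iters):
              w = 1.0 / (1.0 + fit - fit.min())                  # objective-based "pheromone" weights
              p = w / w.sum()
              sigma = sols.std(axis=0) + 1e-6                    # spread self-adjusts with the archive
              ants = np.clip(sols[rng.choice(archive, n_ants, p=p)]
                             + rng.normal(0.0, sigma, (n_ants, dim)), lo, hi)
              f_ants = np.array([cost(a) for a in ants])
              pool = np.vstack([sols, ants])
              f_pool = np.concatenate([fit, f_ants])
              keep = np.argsort(f_pool)[:archive]                # filter candidates: keep the best
              sols, fit = pool[keep], f_pool[keep]
          return sols[0], fit[0]

      # e.g. aco_continuous(lambda x: float(np.sum(x**2)), dim=5)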

  6. Separator profile selection for optimal battery performance

    NASA Astrophysics Data System (ADS)

    Whear, J. Kevin

    Battery performance, depending on the application, is normally defined by power delivery, electrical capacity, cycling regime, and life in service. In order to meet the various performance goals, the battery design engineer can vary grid alloys, paste formulations, the number of plates, and methods of construction. Another design option available to optimize battery performance is the separator profile. The goal of this paper is to demonstrate how separator profile selection can be utilized to optimize battery performance and manufacturing efficiencies. Time will also be given to exploring novel separator profiles that may bring even greater benefits in the future. All major lead-acid applications will be considered, including automotive, motive power, and stationary.

  7. Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems.

    PubMed

    Vrabie, Draguna; Lewis, Frank

    2009-04-01

    In this paper we present in a continuous-time framework an online approach to direct adaptive optimal control with infinite horizon cost for nonlinear systems. The algorithm converges online to the optimal control solution without knowledge of the internal system dynamics. Closed-loop dynamic stability is guaranteed throughout. The algorithm is based on a reinforcement learning scheme, namely Policy Iterations, and makes use of neural networks, in an Actor/Critic structure, to parametrically represent the control policy and the performance of the control system. The two neural networks are trained to express the optimal controller and optimal cost function which describes the infinite horizon control performance. Convergence of the algorithm is proven under the realistic assumption that the two neural networks do not provide perfect representations for the nonlinear control and cost functions. The result is a hybrid control structure which involves a continuous-time controller and a supervisory adaptation structure which operates based on data sampled from the plant and from the continuous-time performance dynamics. Such control structure is unlike any standard form of controllers previously seen in the literature. Simulation results, obtained considering two second-order nonlinear systems, are provided. PMID:19362449

  8. Optimal performance of a quantum Otto refrigerator

    NASA Astrophysics Data System (ADS)

    Abah, Obinna; Lutz, Eric

    2016-03-01

    We consider a quantum Otto refrigerator cycle of a time-dependent harmonic oscillator. We investigate the coefficient of performance at maximum figure of merit for adiabatic and nonadiabatic frequency modulations. We obtain analytical expressions for the optimal performance both in the high-temperature (classical) regime and in the low-temperature (quantum) limit. We moreover analyze the breakdown of the cooling cycle for strongly nonadiabatic driving protocols and derive analytical estimates for the minimal driving time allowed for cooling.

  9. Self-Adaptive Stepsize Search Applied to Optimal Structural Design

    NASA Astrophysics Data System (ADS)

    Nolle, L.; Bland, J. A.

    Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure and hence the weight of its constituent members has to be as low as possible for economical reasons without violating any of the load constraints. Design spaces are usually vast and the computational costs for analyzing a single design are usually high. Therefore, not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes the drawback of modern search heuristics, i.e. the need to first find a set of optimum control parameter settings for the problem at hand. In this work, SASS outperforms simulated-annealing, genetic algorithms, tabu search and ant colony optimization.
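
    The single control parameter makes this heuristic easy to sketch: one candidate design is perturbed at random, and the step size grows on success and contracts on failure. The growth and contraction factors below are assumptions, not the published SASS settings, and the truss-weight cost function in the usage comment is hypothetical.

      import random

      def sass(cost, x0, step=1.0, iters=5000, step_min=1e-8):
          """Self-adaptive stepsize search: the step is the only control parameter."""
          x, fx = list(x0), cost(x0)
          for _ in range(iters):
              cand = [xi + random.uniform(-step, step) for xi in x]
              fc = cost(cand)
              if fc < fx:
                  x, fx = cand, fc
                  step *= 1.2        # reward success with a larger step
              else:
                  step *= 0.98       # otherwise contract toward a local search
              if step < step_min:
                  break
          return x, fx

      # e.g. sass(weight_of_25_bar_design, x0=[1.0] * 25)  # weight_of_25_bar_design is hypothetical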

  10. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and blending of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance image adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors which are threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits the better performance than other methods involved in the paper. PMID:25784928
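
    Only the transformation stage lends itself to a short sketch: normalized gray levels are remapped through the regularized incomplete Beta function, whose two shape parameters are what the CS-PSO search would tune against the paper's quality criterion. The fitness loop, the optimizer, and the parameter values below are omitted or assumed.

      import numpy as np
      from scipy.special import betainc

      def beta_enhance(image, a, b):
          """Remap normalized intensities through the regularized incomplete Beta function."""
          span = image.max() - image.min()
          x = (image - image.min()) / (span + 1e-12)       # normalize to [0, 1]
          y = betainc(a, b, x)                             # incomplete Beta transformation
          return (y * 255).astype(np.uint8)

      low_contrast = np.clip(np.random.normal(120.0, 10.0, (64, 64)), 0, 255)
      enhanced = beta_enhance(low_contrast, a=2.0, b=2.0)  # in practice (a, b) come from the CS-PSO search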

  11. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and blending of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance image adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors which are threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits the better performance than other methods involved in the paper. PMID:25784928

  12. Automated Cache Performance Analysis And Optimization

    SciTech Connect

    Mohror, Kathryn

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand" requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  13. Sleep As A Strategy For Optimizing Performance.

    PubMed

    Yarnell, Angela M; Deuster, Patricia

    2016-01-01

    Recovery is an essential component of maintaining, sustaining, and optimizing cognitive and physical performance during and after demanding training and strenuous missions. Getting sufficient amounts of rest and sleep is key to recovery. This article focuses on sleep and discusses (1) why getting sufficient sleep is important, (2) how to optimize sleep, and (3) tools available to help maximize sleep-related performance. Insufficient sleep negatively impacts safety and readiness through reduced cognitive function, more accidents, and increased military friendly-fire incidents. Sufficient sleep is linked to better cognitive performance outcomes, increased vigor, and better physical and athletic performance as well as improved emotional and social functioning. Because Special Operations missions do not always allow for optimal rest or sleep, the impact of reduced rest and sleep on readiness and mission success should be minimized through appropriate preparation and planning. Preparation includes periods of "banking" or extending sleep opportunities before periods of loss, monitoring sleep by using tools like actigraphy to measure sleep and activity, assessing mental effectiveness, exploiting strategic sleep opportunities, and consuming caffeine at recommended doses to reduce fatigue during periods of loss. Together, these efforts may decrease the impact of sleep loss on mission and performance. PMID:27045502

  14. Active control of combustion for optimal performance

    SciTech Connect

    Jackson, M.D.; Agrawal, A.K.

    1999-07-01

    Combustion-zone stoichiometry and fuel-air premixing were actively controlled to optimize the combustor performance over a range of operating conditions. The objective was to maximize the combustion temperature, while maintaining NOx within a specified limit. The combustion system consisted of a premixer located coaxially near the inlet of a water-cooled shroud. The equivalence ratio was controlled by a variable-speed suction fan located downstream. The split between the premixing air and diffusion air was governed by the distance between the premixer and shroud. The combustor performance was characterized by a cost function evaluated from time-averaged measurements of NOx and oxygen concentrations in products. The cost function was minimized by a downhill simplex algorithm employing closed-loop feedback. Experiments were conducted at different fuel flow rates to demonstrate that the controller optimized the performance without prior knowledge of the combustor behavior.
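
    The closed-loop arrangement described here can be sketched with an off-the-shelf downhill simplex (Nelder-Mead) routine proposing actuator settings and a cost built from the measured NOx and oxygen concentrations. The toy plant model, the NOx limit, and the penalty weighting below are stand-ins for the real measurements and are not from the paper.

      import numpy as np
      from scipy.optimize import minimize

      NOX_LIMIT = 25.0                                        # ppm, assumed

      def measure(settings):
          """Stand-in for time-averaged emission measurements at the given actuator settings."""
          fan_speed, premix_fraction = settings
          o2 = 2.0 + 3.0 * fan_speed                          # leaner with more suction (toy model)
          nox = 40.0 * np.exp(-2.0 * premix_fraction) + 5.0 * o2
          return nox, o2

      def cost(settings):
          nox, o2 = measure(settings)
          # Maximizing combustion temperature ~ minimizing excess O2, subject to the NOx limit.
          return o2 + 10.0 * max(0.0, nox - NOX_LIMIT)

      result = minimize(cost, x0=[0.5, 0.5], method="Nelder-Mead")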

  15. Rear-heavy car control by adaptive linear optimal preview

    NASA Astrophysics Data System (ADS)

    Thommyppillai, M.; Evangelou, S.; Sharp, R. S.

    2010-05-01

    Adaptive linear optimal preview control theory is applied to a simple but non-linear car model, with parameters chosen to make the rear axle saturate first in any quasi-steady manoeuvre. The tendency of such a car to spin above a critical speed, which is a function of its running state, causes control to be especially difficult when operating near to the limit of the rear-axle force system. As in previous work, trim states and optimal gains are computed off-line for a given speed and a full range of lateral accelerations. Gain-scheduling with interpolation over trims and gain sets is used to keep the control appropriate to the running conditions, as they change. Simulations of manoeuvres are used to test and demonstrate the system capability. It is shown that utilising the rear-axle lateral-slip ratio as the scheduling variable, in the case of this rear-heavy car, gives excellent tracking, even when the tyres are run close to full saturation. It is implied by this and previous work that the general case can be treated effectively by monitoring both front- and rear-axle slips and scheduling on a worst-case basis.
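
    The gain-scheduling step described above reduces, at run time, to interpolating between gain sets computed off-line over a grid of trim conditions. A minimal sketch of that table lookup follows; the use of rear-axle lateral slip as the scheduling variable follows the abstract, but the grid and gain values are illustrative assumptions.

```python
import numpy as np

# Off-line table: preview-control gain vectors computed at a grid of
# rear-axle lateral-slip ratios (values are illustrative, not from the paper).
slip_grid = np.array([0.00, 0.02, 0.04, 0.06, 0.08])
gain_table = np.array([
    [4.0, 1.2, 0.30],   # gains at slip = 0.00
    [3.6, 1.1, 0.28],
    [3.0, 0.9, 0.25],
    [2.2, 0.7, 0.20],
    [1.5, 0.5, 0.15],   # gains near rear-axle saturation
])

def scheduled_gains(rear_slip):
    # Linear interpolation of each gain over the slip grid, clipped at the ends.
    s = np.clip(rear_slip, slip_grid[0], slip_grid[-1])
    return np.array([np.interp(s, slip_grid, gain_table[:, j])
                     for j in range(gain_table.shape[1])])

print(scheduled_gains(0.05))   # gains appropriate to the current running state
```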

  16. Performance of keck adaptive optics with sodium laser guide star

    SciTech Connect

    Gavel, D.T.; Olivier, S.; Brase, J.

    1996-03-08

    The Keck telescope adaptive optics system is designed to optimize performance in the 1 to 3 micron region of observation wavelengths (J, H, and K astronomical bands). The system uses a 249 degree of freedom deformable mirror, so that the interactuator spacing is 56 cm as mapped onto the 10 meter aperture. 56 cm is roughly equal to r0 at 1.4 microns, which implies the wavefront fitting error is 0.52(λ/2π)(d/r0)^(5/6) = 118 nm rms. This is sufficient to produce a system Strehl of 0.74 at 1.4 microns if all other sources of error are negligible, which would be the case with a bright natural guidestar and very high control bandwidth. Other errors associated with the adaptive optics will however contribute to Strehl degradation, namely, servo bandwidth error due to inability to reject all temporal frequencies of the aberrated wavefront, wavefront measurement error due to finite signal-to-noise ratio in the wavefront sensor, and, in the case of a laser guidestar, the so-called cone effect where rays from the guidestar beacon fail to sample some of the upper atmosphere turbulence. Cone effect is mitigated considerably by the use of the very high altitude sodium laser guidestar (90 km altitude), as opposed to Rayleigh beacons at 20 km. However, considering the Keck telescope's large aperture, this is still the dominating wavefront error contributor in the current adaptive optics system design.
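
    The fitting-error budget quoted above can be checked with a few lines of arithmetic: with the actuator pitch roughly equal to r0 at 1.4 microns, the fitting error is about 0.52 λ/(2π) ≈ 116-118 nm rms, and the Maréchal approximation then gives a Strehl near 0.75, consistent with the quoted 0.74. The short computation below reproduces those numbers; the exact r0 value used is an assumption.

```python
import numpy as np

wavelength_nm = 1400.0          # observation wavelength (1.4 microns)
d_cm = 56.0                     # interactuator spacing mapped onto the aperture
r0_cm = 55.0                    # assumed Fried parameter at 1.4 microns (~d)

# Wavefront fitting error quoted in the abstract:
# sigma_fit = 0.52 * (lambda / (2*pi)) * (d / r0)^(5/6)
sigma_fit_nm = 0.52 * (wavelength_nm / (2 * np.pi)) * (d_cm / r0_cm) ** (5.0 / 6.0)

# Marechal approximation: Strehl ~ exp(-(2*pi*sigma/lambda)^2)
strehl = np.exp(-(2 * np.pi * sigma_fit_nm / wavelength_nm) ** 2)

print(f"fitting error ~ {sigma_fit_nm:.0f} nm rms, Strehl ~ {strehl:.2f}")
```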

  17. Optimized adaptation algorithm for HEVC/H.265 dynamic adaptive streaming over HTTP using variable segment duration

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2016-04-01

    Adaptive video streaming using HTTP has become popular in recent years for commercial video delivery. The recent MPEG-DASH standard allows interoperability and adaptability between servers and clients from different vendors. The delivery of the MPD (Media Presentation Description) files in DASH and the DASH client behaviours are beyond the scope of the DASH standard. However, the different adaptation algorithms employed by the clients do affect the overall performance of the system and users' QoE (Quality of Experience), hence the need for research in this field. Moreover, standard DASH delivery is based on fixed segments of the video. However, there is no standard segment duration for DASH; various fixed segment durations have been employed by different commercial solutions and researchers, each with their own individual merits. Most recently, the use of variable segment duration in DASH has emerged, but only a few preliminary studies without practical implementation exist. In addition, such a technique requires a DASH client to be aware of segment duration variations, and this requirement and the corresponding implications on the DASH system design have not been investigated. This paper proposes a segment-duration-aware bandwidth estimation and next-segment selection adaptation strategy for DASH. Firstly, an MPD file extension scheme to support variable segment duration is proposed and implemented in a realistic hardware testbed. The scheme is tested on a DASH client, and the tests and analysis have led to insights into the time to download the next segment and the buffer behaviour when fetching and switching between segments of different playback durations. Issues like sustained buffering when switching between segments of different durations and slow response to changing network conditions are highlighted and investigated. An enhanced adaptation algorithm is then proposed to accurately estimate the bandwidth and precisely determine the time to download the next
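
    A segment-duration-aware client estimates bandwidth from each fetch (segment size over download time) and must account for the segment's actual playback duration when reasoning about buffer occupancy before choosing the next representation. The sketch below is a generic illustration of that idea, not the algorithm proposed in the paper; the smoothing factor, safety margin, and bitrate ladder are assumptions.

```python
# Generic sketch of segment-duration-aware rate adaptation for a DASH client.
# Not the algorithm proposed in the paper; all constants are illustrative.

BITRATE_LADDER_KBPS = [500, 1200, 2500, 5000]   # available representations
SAFETY_MARGIN = 0.8                             # use 80% of estimated bandwidth
ALPHA = 0.3                                     # EWMA smoothing factor

class RateAdapter:
    def __init__(self):
        self.bw_estimate_kbps = None

    def on_segment_downloaded(self, size_kbit, download_s, duration_s):
        # Throughput of this fetch; the segment's playback duration matters
        # for buffer accounting, not for the throughput sample itself.
        sample = size_kbit / download_s
        if self.bw_estimate_kbps is None:
            self.bw_estimate_kbps = sample
        else:
            self.bw_estimate_kbps = ALPHA * sample + (1 - ALPHA) * self.bw_estimate_kbps
        # Buffer gain from this segment (playback seconds added minus seconds spent).
        return duration_s - download_s

    def next_representation(self):
        budget = SAFETY_MARGIN * self.bw_estimate_kbps
        candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
        return max(candidates) if candidates else BITRATE_LADDER_KBPS[0]

adapter = RateAdapter()
adapter.on_segment_downloaded(size_kbit=4000, download_s=1.2, duration_s=2.0)
print(adapter.next_representation())
```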

  18. Beam orientation optimization for intensity modulated radiation therapy using adaptive l2,1-minimization

    NASA Astrophysics Data System (ADS)

    Jia, Xun; Men, Chunhua; Lou, Yifei; Jiang, Steve B.

    2011-10-01

    Beam orientation optimization (BOO) is a key component in the process of intensity modulated radiation therapy treatment planning. It determines to what degree one can achieve a good treatment plan in the subsequent plan optimization process. In this paper, we have developed a BOO algorithm via adaptive l2, 1-minimization. Specifically, we introduce a sparsity objective function term into our model which contains weighting factors for each beam angle adaptively adjusted during the optimization process. Such an objective function favors a small number of beam angles. By optimizing a total objective function consisting of a dosimetric term and the sparsity term, we are able to identify unimportant beam angles and gradually remove them without largely sacrificing the dosimetric objective. In one typical prostate case, the convergence property of our algorithm, as well as how beam angles are selected during the optimization process, is demonstrated. Fluence map optimization (FMO) is then performed based on the optimized beam angles. The resulting plan quality is presented and is found to be better than that of equiangular beam orientations. We have further systematically validated our algorithm in the contexts of 5-9 coplanar beams for five prostate cases and one head and neck case. For each case, the final FMO objective function value is used to compare the optimized beam orientations with the equiangular ones. It is found that, in the majority of cases tested, our BOO algorithm leads to beam configurations which attain lower FMO objective function values than those of corresponding equiangular cases, indicating the effectiveness of our BOO algorithm. Superior plan qualities are also demonstrated by comparing DVH curves between BOO plans and equiangular plans.
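
    The sparsity term described above is a weighted group (l2,1) norm over the per-beam fluence blocks, with the weights adjusted adaptively so that weakly used beams are pushed toward zero. The fragment below is a generic illustration of such adaptive group-sparse reweighting via proximal gradient steps on random stand-in data; it is not the authors' implementation, and the dose-influence matrix, step size, and reweighting constants are assumptions.

```python
import numpy as np

# Generic sketch: adaptive group-sparse (l2,1) beam selection.
#   min_x  0.5*||D x - d||^2  +  sum_b  w_b * ||x_b||_2 ,   x >= 0
# where x_b is the block of beamlet weights belonging to beam angle b.
# D, d, the step size, and the reweighting rule are illustrative assumptions.

rng = np.random.default_rng(0)
n_beams, beamlets_per_beam, n_voxels = 12, 20, 300
D = rng.random((n_voxels, n_beams * beamlets_per_beam))
d = rng.random(n_voxels) * 10.0

x = np.zeros(n_beams * beamlets_per_beam)
w = np.ones(n_beams)                        # adaptive per-beam weights
step = 1.0 / np.linalg.norm(D, 2) ** 2      # gradient step size
eps = 1e-3

for it in range(200):
    # Gradient step on the dosimetric (least-squares) term.
    x = x - step * (D.T @ (D @ x - d))
    # Group soft-thresholding (proximal step of the weighted l2,1 term),
    # followed by projection onto nonnegative fluences.
    for b in range(n_beams):
        blk = slice(b * beamlets_per_beam, (b + 1) * beamlets_per_beam)
        norm_b = np.linalg.norm(x[blk])
        x[blk] *= max(0.0, 1.0 - step * w[b] / (norm_b + 1e-12))
    x = np.maximum(x, 0.0)
    # Adaptive reweighting: beams with small total fluence receive larger
    # weights and are gradually suppressed.
    block_norms = np.array([np.linalg.norm(x[b * beamlets_per_beam:
                                             (b + 1) * beamlets_per_beam])
                            for b in range(n_beams)])
    w = 1.0 / (block_norms + eps)

print("per-beam fluence norms:", np.round(block_norms, 3))
```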

  19. Optimization and analysis of a CFJ-airfoil using adaptive meta-model based design optimization

    NASA Astrophysics Data System (ADS)

    Whitlock, Michael D.

    Although the strong potential of the Co-Flow Jet (CFJ) flow separation control system has been demonstrated in the existing literature, there has been little effort applied towards the optimization of the design for a given application. The high-dimensional design space makes any optimization computationally intensive. This work presents the optimization of a CFJ airfoil as applied to a low Reynolds number regime using meta-model based design optimization (MBDO). The approach consists of computational fluid dynamics (CFD) analysis coupled with a surrogate model derived using Kriging. A genetic algorithm (GA) is then used to perform optimization on the efficient surrogate model. MBDO was shown to be an effective and efficient approach to solving the CFJ design problem. The final solution set was found to decrease drag by 100% while increasing lift by 42%. When validated, the final solution was found to be within one standard deviation of the CFD model it was representing.

  20. Implementation and on-sky results of an optimal wavefront controller for the MMT NGS adaptive optics system

    NASA Astrophysics Data System (ADS)

    Powell, Keith B.; Vaitheeswaran, Vidhya

    2010-07-01

    The MMT observatory has recently implemented and tested an optimal wavefront controller for the NGS adaptive optics system. Open loop atmospheric data collected at the telescope is used as the input to a MATLAB based analytical model. The model uses nonlinear constrained minimization to determine controller gains and optimize the system performance. The real-time controller performing the adaptive optics closed-loop operation is implemented on a dedicated high performance PC based quad core server. The controller algorithm is written in C and uses the GNU scientific library for linear algebra. Tests at the MMT confirmed the optimal controller significantly reduced the residual RMS wavefront compared with the previous controller. Significant reductions in image FWHM and increased peak intensities were obtained in the J, H, and K bands. The optimal PID controller is now operating as the baseline wavefront controller for the MMT NGS-AO system.

  1. Multi-objective parameter optimization of common land model using adaptive surrogate modelling

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.

    2014-06-01

    Parameter specification usually has significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task due to the following reasons: (1) LSMs usually have too many adjustable parameters (20-100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a huge number of model runs (typically 10^5-10^6), which makes parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet the aforementioned challenges: (1) use parameter screening to reduce the number of adjustable parameters; (2) use surrogate models to emulate the response of dynamic models to the variation of adjustable parameters; (3) use an adaptive strategy to promote the efficiency of surrogate-modeling-based optimization; (4) use a weighting function to transfer multi-objective optimization to single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column case study of a land surface model - the Common Land Model (CoLM) - and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can achieve an optimal parameter set using only 411 model runs in total, and is worth extending to other large complex dynamic models, such as regional land surface models, atmospheric models and climate models.

  2. Identification of robust adaptation gene regulatory network parameters using an improved particle swarm optimization algorithm.

    PubMed

    Huang, X N; Ren, H P

    2016-01-01

    Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment; it means the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on the GRN is a multi-variable, multi-objective, and multi-peak optimization problem for which it is difficult to acquire satisfactory solutions, especially high-quality ones. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. A particle crossover operation and an elitist preservation strategy are also used in the proposed algorithm. The simulation results revealed that the proposed algorithm could identify multiple solutions in a single run. Moreover, it demonstrated superior performance compared with previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for providing guidance in designing GRNs with superior robust adaptation. PMID:27323043

  3. Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Thompson, David R.; Hsiang, Kian

    2013-01-01

    This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently, the objective function from each potential sensing action. These candidate sensings were then cross-checked against operation costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.

  4. Performance-Based Adaptive Fuzzy Tracking Control for Networked Industrial Processes.

    PubMed

    Wang, Tong; Qiu, Jianbin; Yin, Shen; Gao, Huijun; Fan, Jialu; Chai, Tianyou

    2016-08-01

    In this paper, the performance-based control design problem for double-layer networked industrial processes is investigated. At the device layer, prescribed performance functions are first given to describe the output tracking performance, and then, by using the backstepping technique, new adaptive fuzzy controllers are designed to guarantee the tracking performance under the effects of input dead-zone and the constraint of the prescribed tracking performance functions. At the operation layer, by considering the stochastic disturbance, actual index value, target index value, and index prediction simultaneously, an adaptive inverse optimal controller in discrete-time form is designed to optimize the overall performance and stabilize the overall nonlinear system. Finally, a simulation example of a continuous stirred tank reactor system is presented to show the effectiveness of the proposed control method. PMID:27168605
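
    Prescribed performance control of the kind referred to above typically confines the tracking error inside a decaying envelope ρ(t) and works with a transformed error that grows without bound as the envelope is approached. The sketch below shows one common choice of envelope and transformation; it is a textbook-style illustration under assumed constants, not the controller designed in the paper.

```python
import numpy as np

# Common prescribed-performance construction (illustrative constants):
#   envelope      rho(t) = (rho0 - rho_inf) * exp(-l*t) + rho_inf
#   requirement   -rho(t) < e(t) < rho(t)
#   transformed error eps = ln((1 + z) / (1 - z)) with z = e/rho in (-1, 1),
# so eps diverges as the error nears the envelope, which lets an ordinary
# (e.g. backstepping-based) controller enforce the transient bound.

rho0, rho_inf, decay = 1.0, 0.05, 2.0

def envelope(t):
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def transformed_error(e, t):
    z = np.clip(e / envelope(t), -0.999, 0.999)   # keep strictly inside (-1, 1)
    return np.log((1 + z) / (1 - z))

for t, e in [(0.0, 0.5), (1.0, 0.1), (3.0, 0.04)]:
    print(f"t={t:.1f}  rho={envelope(t):.3f}  eps={transformed_error(e, t):+.3f}")
```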

  5. Enhancing astronaut performance using sensorimotor adaptability training.

    PubMed

    Bloomberg, Jacob J; Peters, Brian T; Cohen, Helen S; Mulavara, Ajitkumar P

    2015-01-01

    Astronauts experience disturbances in balance and gait function when they return to Earth. The highly plastic human brain enables individuals to modify their behavior to match the prevailing environment. Subjects participating in specially designed variable sensory challenge training programs can enhance their ability to rapidly adapt to novel sensory situations. This is useful in our application because we aim to train astronauts to rapidly formulate effective strategies to cope with the balance and locomotor challenges associated with new gravitational environments-enhancing their ability to "learn to learn." We do this by coupling various combinations of sensorimotor challenges with treadmill walking. A unique training system has been developed that is comprised of a treadmill mounted on a motion base to produce movement of the support surface during walking. This system provides challenges to gait stability. Additional sensory variation and challenge are imposed with a virtual visual scene that presents subjects with various combinations of discordant visual information during treadmill walking. This experience allows them to practice resolving challenging and conflicting novel sensory information to improve their ability to adapt rapidly. Information obtained from this work will inform the design of the next generation of sensorimotor countermeasures for astronauts. PMID:26441561

  6. Enhancing astronaut performance using sensorimotor adaptability training

    PubMed Central

    Bloomberg, Jacob J.; Peters, Brian T.; Cohen, Helen S.; Mulavara, Ajitkumar P.

    2015-01-01

    Astronauts experience disturbances in balance and gait function when they return to Earth. The highly plastic human brain enables individuals to modify their behavior to match the prevailing environment. Subjects participating in specially designed variable sensory challenge training programs can enhance their ability to rapidly adapt to novel sensory situations. This is useful in our application because we aim to train astronauts to rapidly formulate effective strategies to cope with the balance and locomotor challenges associated with new gravitational environments—enhancing their ability to “learn to learn.” We do this by coupling various combinations of sensorimotor challenges with treadmill walking. A unique training system has been developed that is comprised of a treadmill mounted on a motion base to produce movement of the support surface during walking. This system provides challenges to gait stability. Additional sensory variation and challenge are imposed with a virtual visual scene that presents subjects with various combinations of discordant visual information during treadmill walking. This experience allows them to practice resolving challenging and conflicting novel sensory information to improve their ability to adapt rapidly. Information obtained from this work will inform the design of the next generation of sensorimotor countermeasures for astronauts. PMID:26441561

  7. Performance-optimized clinical IMRT planning on modern CPUs

    NASA Astrophysics Data System (ADS)

    Ziegenhein, Peter; Kamerling, Cornelis Ph; Bangert, Mark; Kunkel, Julian; Oelfke, Uwe

    2013-06-01

    Intensity modulated treatment plan optimization is a computationally expensive task. The feasibility of advanced applications in intensity modulated radiation therapy, such as everyday treatment planning, frequent re-planning for adaptive radiation therapy, and large-scale planning research, depends heavily on the runtime of the plan optimization implementation. Modern computational systems are built as parallel architectures to yield high performance. The use of GPUs, as one class of parallel systems, has become very popular in the field of medical physics. In contrast, we utilize the multi-core central processing unit (CPU), which is the heart of every modern computer and does not have to be purchased additionally. In this work we present an ultra-fast, high precision implementation of the inverse plan optimization problem using a quasi-Newton method on pre-calculated dose influence data sets. We redefined the classical optimization algorithm to achieve a minimal runtime and high scalability on CPUs. Using the proposed methods in this work, a total plan optimization process can be carried out in only a few seconds on a low-cost CPU-based desktop computer at clinical resolution and quality. We have shown that our implementation uses the CPU hardware resources efficiently with runtimes comparable to GPU implementations, at lower costs.
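
    The optimization kernel described above is, in essence, a quasi-Newton minimization of a voxel-wise dose objective over nonnegative beamlet weights, using a precomputed dose-influence matrix. The sketch below reproduces that structure with SciPy's L-BFGS-B on random stand-in data; the matrix, prescription, and penalty weights are assumptions, and no claim is made about matching the paper's runtime-oriented implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in data: dose-influence matrix D (voxels x beamlets), prescribed dose
# per voxel, and per-voxel penalty weights (e.g. higher inside the target).
rng = np.random.default_rng(1)
n_voxels, n_beamlets = 2000, 400
D = rng.random((n_voxels, n_beamlets)) * 0.05
prescription = np.full(n_voxels, 2.0)
weights = rng.choice([1.0, 10.0], size=n_voxels, p=[0.7, 0.3])

def objective(x):
    # Weighted quadratic deviation from the prescription; return value and
    # gradient so L-BFGS-B does not need finite differencing.
    dose = D @ x
    diff = dose - prescription
    f = 0.5 * np.sum(weights * diff ** 2)
    g = D.T @ (weights * diff)
    return f, g

x0 = np.zeros(n_beamlets)
res = minimize(objective, x0, jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * n_beamlets)
print("final objective:", res.fun)
```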

  8. Morphology optimization for enhanced performance in organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Wodo, Olga; Zola, Jaroslaw; Ganapathysubramanian, Baskar

    2015-03-01

    Organic solar cells have the potential for widespread usage due to their low cost-per-watt and mechanical flexibility. Their widespread use, however, is bottlenecked primarily by their low solar efficiencies. Experimental evidence suggests that a key property determining the solar efficiency of such devices is the final morphological distribution of the electron-donor and electron-acceptor constituents. By carefully designing the morphology of the device, one could potentially significantly enhance their performance. This is an area of intense experimental effort that is mostly trial-and-error based, and serves as a fertile area for introducing mechanics and computational thinking. In this work, we use optimization techniques coupled with computational modeling to identify the optimal structures for high efficiency solar cells. In particular, we use an adaptive population-based incremental learning method linked to a graph-based surrogate model to evaluate properties for a given structure. We study several different criteria and find optimal structures that improve the performance of currently hypothesized optimal structures by 29%.
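
    Population-based incremental learning (PBIL), mentioned above, evolves a probability vector from which candidate structures are sampled, nudging the vector toward the best candidate of each generation. The sketch below shows the core update on a toy binary encoding with a made-up scoring function; the encoding, the scoring stand-in for the graph-based surrogate, and the learning rate are assumptions.

```python
import numpy as np

# Toy PBIL: each candidate is a binary string standing in for a (hypothetical)
# donor/acceptor morphology encoding; the score below is a stand-in for the
# graph-based surrogate evaluation used in the paper.
rng = np.random.default_rng(42)
n_bits, pop_size, generations, lr = 64, 50, 200, 0.1
target = rng.integers(0, 2, n_bits)              # pretend "ideal" structure

def score(candidate):
    return np.sum(candidate == target)           # surrogate-model stand-in

p = np.full(n_bits, 0.5)                         # probability vector
for gen in range(generations):
    population = (rng.random((pop_size, n_bits)) < p).astype(int)
    best = population[np.argmax([score(c) for c in population])]
    # Move the probability vector toward the best sample of this generation.
    p = (1.0 - lr) * p + lr * best
    # Small perturbation keeps the search from collapsing prematurely.
    p = np.clip(p + rng.normal(0.0, 0.02, n_bits), 0.02, 0.98)

print("best score found:", score((p > 0.5).astype(int)), "of", n_bits)
```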

  9. ATLAS offline software performance monitoring and optimization

    NASA Astrophysics Data System (ADS)

    Chauhan, N.; Kabra, G.; Kittelmann, T.; Langenberg, R.; Mandrysch, R.; Salzburger, A.; Seuster, R.; Ritsch, E.; Stewart, G.; van Eldik, N.; Vitillo, R.; Atlas Collaboration

    2014-06-01

    In a complex multi-developer, multi-package software environment, such as the ATLAS offline framework Athena, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide the optimization work. The first tool we used to instrument the code is PAPI, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event results in a good understanding of the algorithm level performance of ATLAS code. Further data can be obtained using Pin, a dynamic binary instrumentation tool. Pin tools can be used to obtain similar statistics as PAPI, but advantageously without requiring recompilation of the code. Fine grained routine and instruction level instrumentation is also possible. Pin tools can additionally interrogate the arguments to functions, like those in linear algebra libraries, so that a detailed usage profile can be obtained. These tools have characterized the extensive use of vector and matrix operations in ATLAS tracking. Currently, CLHEP is used here, which is not an optimal choice. To help evaluate replacement libraries a testbed has been set up allowing comparison of the performance of different linear algebra libraries (including CLHEP, Eigen and SMatrix/SVector). Results are then presented via the ATLAS Performance Management Board framework, which runs daily with the current development branch of the code and monitors reconstruction and Monte-Carlo jobs. This framework analyses the CPU and memory performance of algorithms and an overview of the results is presented on a web page. These tools have provided the insight necessary to plan and implement performance enhancements in ATLAS code by identifying

  10. PSO-based multiobjective optimization with dynamic population size and adaptive local archives.

    PubMed

    Leong, Wen-Fung; Yen, Gary G

    2008-10-01

    Recently, various multiobjective particle swarm optimization (MOPSO) algorithms have been developed to efficiently and effectively solve multiobjective optimization problems. However, the existing MOPSO designs generally adopt the notion of "estimating" a fixed population size sufficient to explore the search space without incurring excessive computational complexity. To address the issue, this paper proposes the integration of a dynamic population strategy within the multiple-swarm MOPSO. The proposed algorithm is named dynamic population multiple-swarm MOPSO. An additional feature, adaptive local archives, is designed to improve the diversity within each swarm. Performance metrics and benchmark test functions are used to examine the performance of the proposed algorithm compared with that of five selected MOPSOs and two selected multiobjective evolutionary algorithms. In addition, the computational cost of the proposed algorithm is quantified and compared with that of the selected MOPSOs. The proposed algorithm shows competitive results with improved diversity and convergence and demands less computational cost. PMID:18784011

  11. Adaptive Portfolio Optimization for Multiple Electricity Markets Participation.

    PubMed

    Pinto, Tiago; Morais, Hugo; Sousa, Tiago M; Sousa, Tiago; Vale, Zita; Praca, Isabel; Faia, Ricardo; Pires, Eduardo Jose Solteiro

    2016-08-01

    The increase of distributed energy resources, mainly based on renewable sources, requires new solutions that are able to deal with this type of resources' particular characteristics (namely, the intermittent nature of renewable energy sources). The smart grid concept is gaining consensus as the most suitable solution to facilitate the small players' participation in electric power negotiations while improving energy efficiency. The opportunity for players' participation in multiple energy negotiation environments (smart grid negotiation in addition to the already implemented market types, such as day-ahead spot markets, balancing markets, intraday negotiations, bilateral contracts, and forward and futures negotiations, among others) requires players to take suitable decisions on whether to, and how to, participate in each market type. This paper proposes a portfolio optimization methodology, which provides the best investment profile for a market player, considering different market opportunities. The amount of power that each supported player should negotiate in each available market type in order to maximize its profits is determined from the prices that are expected to be achieved in each market, in different contexts. The price forecasts are performed using artificial neural networks, providing a database of the expected prices in the different market types at each time. This database is then used as input by an evolutionary particle swarm optimization process, which generates the most advantageous participation portfolio for the market player. The proposed approach is tested and validated with simulations performed in a multiagent simulator of competitive electricity markets, using real electricity market data from the Iberian market operator, MIBEL. PMID:26353382

  12. On the optimal reconstruction and control of adaptive optical systems with mirror dynamics.

    PubMed

    Correia, Carlos; Raynaud, Henri-François; Kulcsár, Caroline; Conan, Jean-Marc

    2010-02-01

    In adaptive optics (AO) the deformable mirror (DM) dynamics are usually neglected because, in general, the DM can be considered infinitely fast. Such an assumption may no longer apply for the upcoming Extremely Large Telescopes (ELTs), with DMs that are several meters in diameter and have slow and/or resonant responses. For such systems an important challenge is to design an optimal regulator minimizing the variance of the residual phase. In this contribution, the general optimal minimum-variance (MV) solution to the full dynamical reconstruction and control problem of AO systems (AOSs) is established. It can be looked upon as the parent solution from which simpler (used hitherto) suboptimal solutions can be derived as special cases. These include either partial DM-dynamics-free solutions or solutions derived from the static minimum-variance reconstruction (where both atmospheric disturbance and DM dynamics are neglected altogether). Based on a continuous stochastic model of the disturbance, a state-space approach is developed that yields a fully optimal MV solution in the form of a discrete-time linear-quadratic-Gaussian (LQG) regulator design. From this LQG standpoint, the control-oriented state-space model allows one to (1) derive the optimal state-feedback linear regulator and (2) evaluate the performance of both the optimal and the sub-optimal solutions. Performance results are given for weakly damped second-order oscillatory DMs with large-amplitude resonant responses, in conditions representative of an ELT AO system. The highly energetic optical disturbance caused on the tip/tilt (TT) modes by the wind buffeting is considered. Results show that resonant responses are correctly handled with the MV regulator developed here. The use of sub-optimal regulators results in prohibitive performance losses in terms of residual variance; in addition, the closed-loop system may become unstable for resonant frequencies in the range of interest. PMID:20126246
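
    The MV/LQG regulator discussed above is, at its core, a discrete-time linear-quadratic design on a state-space model that includes the mirror dynamics alongside the disturbance model. The fragment below shows only the generic LQR and steady-state Kalman gain computation with SciPy on a small made-up model (a lightly damped mode standing in for a resonant DM); the AO-specific modelling of turbulence and DM resonances is the substance of the paper and is not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy discrete-time state-space model (2 states): a lightly damped oscillatory
# mode standing in for a resonant DM, driven by one control input.
A = np.array([[1.8, -0.9],
              [1.0,  0.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[1.0, 0.0]])
Q = C.T @ C             # penalize the measured/residual output
R = np.array([[1e-2]])  # control-effort penalty
W = 1e-3 * np.eye(2)    # process-noise covariance (disturbance model stand-in)
V = np.array([[1e-2]])  # measurement-noise covariance

# LQR state-feedback gain: u = -K x
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Steady-state Kalman gain for the state estimator (duality with LQR).
S = solve_discrete_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

print("LQR gain K:", K, "\nKalman gain L:", L.ravel())
```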

  13. Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2012-01-01

    This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.

  14. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  15. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  16. Adaptable Learning Pathway Generation with Ant Colony Optimization

    ERIC Educational Resources Information Center

    Wong, Lung-Hsiang; Looi, Chee-Kit

    2009-01-01

    One of the new major directions in research on web-based educational systems is the notion of adaptability: the educational system adapts itself to the learning profile, preferences and ability of the student. In this paper, we look into the issues of providing adaptability with respect to learning pathways. We explore the state of the art with…

  17. Integrated optimal allocation model for complex adaptive system of water resources management (I): Methodologies

    NASA Astrophysics Data System (ADS)

    Zhou, Yanlai; Guo, Shenglian; Xu, Chong-Yu; Liu, Dedi; Chen, Lu; Ye, Yushi

    2015-12-01

    Due to the adaptive, dynamic and multi-objective characteristics of complex water resources systems, it is a considerable challenge to manage water resources in an efficient, equitable and sustainable way. An integrated optimal allocation model is proposed for a complex adaptive system of water resources management. The model consists of three modules: (1) an agent-based module for revealing the evolution mechanism of the complex adaptive system using agent-based, system dynamics and non-dominated sorting genetic algorithm II methods, (2) an optimal module for deriving the decision set of water resources allocation using a multi-objective genetic algorithm, and (3) a multi-objective evaluation module for evaluating the efficiency of the optimal module and selecting the optimal water resources allocation scheme using the projection pursuit method. This study has provided a theoretical framework for adaptive allocation, dynamic allocation and multi-objective optimization for a complex adaptive system of water resources management.

  18. SSD-Optimized Workload Placement with Adaptive Learning and Classification in HPC Environments

    SciTech Connect

    Wan, Lipeng; Lu, Zheng; Cao, Qing; Wang, Feiyi; Oral, H Sarp; Settlemyer, Bradley W

    2014-01-01

    In recent years, non-volatile memory devices such as SSD drives have emerged as a viable storage solution due to their increasing capacity and decreasing cost. Due to the unique capability and capacity requirements in large-scale HPC (High Performance Computing) storage environments, a hybrid configuration (SSD and HDD) may represent one of the most available and balanced solutions considering the cost and performance. Under this setting, effective data placement as well as movement with controlled overhead become a pressing challenge. In this paper, we propose an integrated object placement and movement framework and adaptive learning algorithms to address these issues. Specifically, we present a method that shuffles data objects across storage tiers to optimize the data access performance. The method also integrates an adaptive learning algorithm in which real-time classification is employed to predict the popularity of data object accesses, so that objects can be placed on, or migrate between, SSD or HDD drives in the most efficient manner. We discuss preliminary results based on this approach using a simulator we developed to show that the proposed methods can dynamically adapt storage placements and access patterns as workloads evolve to achieve the best system level performance such as throughput.
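
    The placement policy sketched below is a generic illustration of the kind of popularity tracking described above: an exponentially weighted access counter per object, with the hottest objects promoted to SSD under a capacity budget and the rest left on HDD. The thresholds, decay rate, and capacity figure are assumptions, not the classifier studied in the paper.

```python
# Generic hot/cold tiering sketch: EWMA popularity per object, promote the
# hottest objects to SSD up to a capacity budget, keep the rest on HDD.
# All constants are illustrative.

DECAY = 0.9            # weight given to past popularity on each epoch
SSD_CAPACITY_GB = 100

class TieringPolicy:
    def __init__(self):
        self.popularity = {}   # object id -> EWMA access score
        self.size_gb = {}      # object id -> size

    def record_epoch(self, access_counts, sizes):
        for obj, size in sizes.items():
            self.size_gb[obj] = size
            old = self.popularity.get(obj, 0.0)
            self.popularity[obj] = DECAY * old + (1 - DECAY) * access_counts.get(obj, 0)

    def placement(self):
        ssd, used = set(), 0.0
        # Greedily fill the SSD with the currently hottest objects.
        for obj in sorted(self.popularity, key=self.popularity.get, reverse=True):
            if used + self.size_gb[obj] <= SSD_CAPACITY_GB:
                ssd.add(obj)
                used += self.size_gb[obj]
        return {obj: ("SSD" if obj in ssd else "HDD") for obj in self.popularity}

policy = TieringPolicy()
policy.record_epoch({"a": 50, "b": 2, "c": 30}, {"a": 40, "b": 80, "c": 30})
print(policy.placement())
```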

  19. Maximal exercise performance after adaptation to microgravity.

    PubMed

    Levine, B D; Lane, L D; Watenpaugh, D E; Gaffney, F A; Buckey, J C; Blomqvist, C G

    1996-08-01

    The cardiovascular system appears to adapt well to microgravity but is compromised on reestablishment of gravitational forces leading to orthostatic intolerance and a reduction in work capacity. However, maximal systemic oxygen uptake (Vo2) and transport, which may be viewed as a measure of the functional integrity of the cardiovascular system and its regulatory mechanisms, has not been systematically measured in space or immediately after return to Earth after spaceflight. We studied six astronauts (4 men and 2 women, age 35-50 yr) before, during, and immediately after 9 or 14 days of microgravity on two Spacelab Life Sciences flights (SLS-1 and SLS-2). Peak Vo2 (Vo2peak) was measured with an incremental protocol on a cycle ergometer after prolonged submaximal exercise at 30 and 60% of Vo2peak. We measured gas fractions by mass spectrometer and ventilation via turbine flowmeter for the calculation of breath-by-breath Vo2, heart rate via electrocardiogram, and cardiac output (Qc) via carbon dioxide rebreathing. Peak power and Vo2 were well maintained during spaceflight and not significantly different compared with 2 wk preflight. Vo2peak was reduced by 22% immediately postflight (P < 0.05), entirely because of a decrease in peak stroke volume and Qc. Peak heart rate, blood pressure, and systemic arteriovenous oxygen difference were unchanged. We conclude that systemic Vo2peak is well maintained in the absence of gravity for 9-14 days but is significantly reduced immediately on return to Earth, most likely because of reduced intravascular blood volume, stroke volume, and Qc. PMID:8872635

  20. Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    Optimal control modification has been developed to improve the robustness of model-reference adaptive control. For systems with linear matched uncertainty, the optimal control modification adaptive law can be shown by a singular perturbation argument to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions of phase and time delay margins for the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy stability margins.

  1. MACAO-VLTI adaptive optics systems performance

    NASA Astrophysics Data System (ADS)

    Arsenault, Robin; Donaldson, Rob; Dupuy, Christophe; Fedrigo, Enrico; Hubin, Norbert N.; Ivanescu, Liviu; Kasper, Markus E.; Oberti, Sylvain; Paufique, Jerome; Rossi, Silvio; Silber, Armin; Delabre, Bernhard; Lizon, Jean-Louis; Gigan, Pierre

    2004-10-01

    In April and August '03, two MACAO-VLTI curvature AO systems were installed on VLT unit telescopes 2 and 3 in Paranal (Chile). These are 60-element systems using a 150 mm bimorph deformable mirror and 60 APDs as WFS detectors. Valuable integration and commissioning experience has been gained during these two missions. Several tests have been performed in order to evaluate system performance on the sky. The systems have proven to be extremely robust, performing in a stable fashion in extreme seeing conditions (seeing up to 3"). A Strehl ratio of 0.65 and a residual tilt smaller than 10 mas have been obtained on the sky in 0.8" seeing conditions. Weak guide source performance is also excellent, with a Strehl of 0.26 on a V~16 magnitude star. Several functionalities have been successfully tested, including chopping, off-axis guiding, atmospheric refraction compensation, etc. The AO system can be used in a totally automatic fashion with a small overhead: the AO loop can be closed on the target less than 60 sec after star acquisition by the telescope. This includes reading the seeing value given by the site monitor, evaluating the guide star magnitude (cycling through neutral density filters), and setting the closed-loop AO parameters (system gain and vibrating membrane mirror stroke), including calculation of the command matrix. The last two systems will be installed in August '04 and in the course of 2005.

  2. Spaceborne multiview image compression based on adaptive disparity compensation with rate-distortion optimization

    NASA Astrophysics Data System (ADS)

    Li, Shigao; Su, Kehua; Jia, Liming

    2016-01-01

    Disparity compensation (DC) and transform coding are incorporated into a hybrid coding scheme to reduce the code rate of multiview images. However, occlusion and inaccurate disparity estimations (DE) impair the performance of DC, especially in spaceborne images. This paper proposes an adaptive disparity-compensation scheme for the compression of spaceborne multiview images, including stereo image pairs and three-line-scanner images. DC with an adaptive loop filter is used to remove redundancy between reference images and target images, and a wavelet-based coding method is used to encode reference images and residue images. In occlusion regions, the DC efficiency may be poor because no inter-view correlation exists. A rate-distortion optimization method is thus designed to select the best prediction mode for local regions. Experimental results show that the proposed scheme can provide significant coding gain compared with some other similar coding schemes, and the time complexity is also competitive.
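
    The mode decision described above is a standard rate-distortion optimization: for each local region, every candidate prediction mode is scored by the Lagrangian cost J = D + λR and the cheapest mode wins. The few lines below illustrate that selection with made-up distortion and rate numbers; the value of λ and the candidate mode set are assumptions, not values from the paper.

```python
# Rate-distortion optimized mode selection for one region (illustrative values).
LAMBDA = 0.3   # Lagrange multiplier trading distortion against bits

candidates = {
    # mode: (distortion as sum of squared differences, rate in bits)
    "disparity_compensation": (120.0, 400),
    "dc_with_loop_filter":    ( 95.0, 430),
    "intra_wavelet":          (140.0, 310),
}

def rd_cost(distortion, rate, lam=LAMBDA):
    return distortion + lam * rate

best_mode = min(candidates, key=lambda m: rd_cost(*candidates[m]))
print(best_mode, {m: round(rd_cost(*c), 1) for m, c in candidates.items()})
```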

  3. Optimizing the Adaptive Stochastic Resonance and Its Application in Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Xiaole; Yang, Jianhua; Liu, Houguang; Cheng, Gang; Chen, Xihui; Xu, Dan

    2015-10-01

    This paper presents an adaptive stochastic resonance method based on an improved artificial fish swarm algorithm. With this method, we can enhance a weak characteristic signal that is submerged in heavy noise, and we can adaptively drive the stochastic resonance toward its optimum. The effectiveness of the proposed method is verified by both numerical simulation and laboratory experimental vibration signals, including normal, chipped-tooth and missing-tooth planetary gearbox conditions under load. Both theoretical and experimental results show that this method can effectively extract weak characteristics in heavy noise. In the experiment, each weak fault feature is extracted successfully from the faulty planetary gear. When compared with the ensemble empirical mode decomposition (EEMD) method, the method proposed in this paper has been found to give remarkable performance.

  4. Performance evaluation of a sensorless adaptive optics multiphoton microscope.

    PubMed

    Skorsetz, Martin; Artal, Pablo; Bueno, Juan M

    2016-03-01

    A wavefront sensorless adaptive optics technique was combined with a custom-made multiphoton microscope to correct for specimen-induced aberrations. A liquid-crystal-on-silicon (LCoS) modulator was used to systematically generate Zernike modes during image recording. The performance of the instrument was evaluated in samples providing different nonlinear signals, and the benefit of correcting higher order aberrations was always noticeable (in both contrast and resolution). The optimum aberration pattern was stable in time for the samples involved here. For a particular depth location within the sample, the wavefront to be precompensated was independent of the size of the imaged area (up to ∼360 × 360 μm²). The mode combination optimizing the recorded image depended on the Zernike correction control sequence; however, the final images hardly differed. At deeper locations, a noticeable dominance of spherical aberration was found. The influence of other aberration terms was also compared to the effect of the spherical aberration. PMID:26469361

  5. Adaptive Virtual Reality Training to Optimize Military Medical Skills Acquisition and Retention.

    PubMed

    Siu, Ka-Chun; Best, Bradley J; Kim, Jong Wook; Oleynikov, Dmitry; Ritter, Frank E

    2016-05-01

    The Department of Defense has pursued the integration of virtual reality simulation into medical training and applications to fulfill the need to train 100,000 military health care personnel annually. Medical personnel transitions, both when entering an operational area and when returning to the civilian theater, are characterized by the need to rapidly reacquire skills that are essential but have decayed through disuse or infrequent use. Improved efficiency in reacquiring such skills is critical to reduce the likelihood of mistakes that may result in mortality and morbidity. We focus here on a study testing a theory of how the skills required for minimally invasive surgery are learned and retained by military surgeons. Our adaptive virtual reality surgical training system will incorporate an intelligent mechanism for tracking performance that will recognize skill deficiencies and generate an optimal adaptive training schedule. Our design models skill acquisition based on a skill retention theory. The complexity of appropriate training tasks is adjusted according to the level of retention and/or surgical experience. Based on preliminary work, our system will improve the capability to interactively assess the level of skill learning and decay, optimize skill relearning across levels of surgical experience, and positively impact skill maintenance. Our system could eventually reduce mortality and morbidity by providing trainees with the re-experience they need to transition between operating theaters. This article reports some data that will support adaptive tutoring of minimally invasive surgery and similar surgical skills. PMID:27168575

  6. Adaptive sequentially space-filling metamodeling applied in optimal water quantity allocation at basin scale

    NASA Astrophysics Data System (ADS)

    Mousavi, S. Jamshid; Shourian, M.

    2010-03-01

    Global optimization models for many problems suffer from high computational costs due to the need to run high-fidelity simulation models for objective function evaluations. Metamodeling is a useful approach to dealing with this problem, in which a fast surrogate model replaces the detailed simulation model. However, training the surrogate model needs enough input-output data, and in the absence of observed data each data point must be obtained by running the simulation model, which may still cause computational difficulties. In this paper a new metamodeling approach called adaptive sequentially space filling (ASSF) is presented, by which the regions in the search space that need more training data are sequentially identified and the process of design of experiments is performed adaptively. Performance of the ASSF approach is tested against a benchmark function optimization problem and optimum basin-scale water allocation problems, in which the MODSIM river basin decision support system is approximated. Results show that the ASSF model is able to find solutions comparable to those of other metamodeling techniques that use random sampling and evolution control strategies, while requiring fewer actual function evaluations.
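
    The ASSF idea of sequentially adding training points where the surrogate is least trustworthy can be illustrated with a Gaussian-process surrogate whose predictive standard deviation selects the next design point. The sketch below does this for a cheap 1-D test function in place of the MODSIM simulation; the kernel, candidate grid, and fixed number of refinement steps are assumptions and not the paper's adaptive strategy.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Expensive simulator stand-in (1-D test function instead of MODSIM runs).
def simulator(x):
    return np.sin(3.0 * x) + 0.3 * x

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 4.0, size=(4, 1))          # small initial design
y = simulator(X).ravel()
candidates = np.linspace(0.0, 4.0, 400).reshape(-1, 1)

for step in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    # Adaptive sequential filling: sample where the surrogate is most
    # uncertain, then run the "simulator" only at that one point.
    x_new = candidates[np.argmax(std)]
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new)[0])

print("surrogate minimum near x =", candidates[np.argmin(mean)].item())
```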

  7. Flight Test of an Adaptive Configuration Optimization System for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Georgie, Jennifer; Barnicki, Joseph S.

    1999-01-01

    A NASA Dryden Flight Research Center program explores the practical application of real-time adaptive configuration optimization for enhanced transport performance on an L-1011 aircraft. This approach is based on calculation of incremental drag from forced-response, symmetric, outboard aileron maneuvers. In real-time operation, the symmetric outboard aileron deflection is directly optimized, and the horizontal stabilator and angle of attack are indirectly optimized. A flight experiment has been conducted from an onboard research engineering test station, and flight research results are presented herein. The optimization system has demonstrated the capability of determining the minimum drag configuration of the aircraft in real time. The drag-minimization algorithm is capable of identifying drag to approximately a one-drag-count level. Optimizing the symmetric outboard aileron position realizes a drag reduction of 2-3 drag counts (approximately 1 percent). Algorithm analysis of maneuvers indicates that two-sided raised-cosine maneuvers improve definition of the symmetric outboard aileron drag effect, thereby improving analysis results and consistency. Ramp maneuvers provide a more even distribution of data collection as a function of excitation deflection than raised-cosine maneuvers do. A commercial operational system would require airdata calculations and normal output of current inertial navigation systems; engine pressure ratio measurements would be optional.

  8. Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.

    PubMed

    Heydari, Ali; Balakrishnan, Sivasubramanya N

    2013-01-01

    To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs of: 1) the reinforcement learning-based training method to the optimal solution; 2) the training error; and 3) the network weights are provided. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with single set of weights and it provides comprehensive feedback solutions online, though it is trained offline. PMID:24808214

  9. Optimizing digital 8mm drive performance

    NASA Technical Reports Server (NTRS)

    Schadegg, Gerry

    1993-01-01

    The experience of attaching over 350,000 digital 8mm drives to 85-plus system platforms has uncovered many factors which can reduce cartridge capacity or drive throughput, reduce reliability, affect cartridge archivability and actually shorten drive life. Some are unique to an installation. Others result from how the system is set up to talk to the drive. Many stem from how applications use the drive, the work load that's present, the kind of media used and, very important, the kind of cleaning program in place. Digital 8mm drives record data at densities that rival those of disk technology. Even with technology this advanced, they are extremely robust and, given proper usage, care and media, should reward the user with a long productive life. The 8mm drive will give its best performance using high-quality 'data grade' media. Even though it costs more, good 'data grade' media can sustain the reliability and rigorous needs of a data storage environment and, with proper care, give users an archival life of 30 years or more. Various factors, taken individually, may not necessarily produce performance or reliability problems. Taken in combination, their effects can compound, resulting in rapid reductions in a drive's serviceable life, cartridge capacity, or drive performance. The key to managing media is determining the importance one places upon their recorded data and, subsequently, setting media usage guidelines that can deliver data reliability. Various options one can implement to optimize digital 8mm drive performance are explored.

  10. Adaptive function allocation reduces performance costs of static automation

    NASA Technical Reports Server (NTRS)

    Parasuraman, Raja; Mouloua, Mustapha; Molloy, Robert; Hilburn, Brian

    1993-01-01

    Adaptive automation offers the option of flexible function allocation between the pilot and on-board computer systems. One of the important claims for the superiority of adaptive over static automation is that such systems do not suffer from some of the drawbacks associated with conventional function allocation. Several experiments designed to test this claim are reported in this article. The efficacy of adaptive function allocation was examined using a laboratory flight-simulation task involving multiple functions of tracking, fuel-management, and systems monitoring. The results show that monitoring inefficiency represents one of the performance costs of static automation. Adaptive function allocation can reduce the performance cost associated with long-term static automation.

  11. Monitoring the Performance of a Neuro-Adaptive Controller

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gupta, Pramod

    2004-01-01

    Traditional control has proven to be ineffective in dealing with catastrophic changes or slow degradation of complex, highly nonlinear systems like aircraft or spacecraft, robotics, or flexible manufacturing systems. Control systems which can adapt to changes in the plant have been proposed as they offer many advantages (e.g., better performance, controllability of an aircraft despite a damaged wing). In the last few years, the use of neural networks in adaptive controllers (neuro-adaptive control) has been studied actively. Neural networks of various architectures have been used successfully for online learning adaptive controllers. In such a typical control architecture, the neural network receives as an input the current deviation between desired and actual plant behavior and, by on-line training, tries to minimize this discrepancy (e.g., by producing a control augmentation signal). Even though neuro-adaptive controllers offer many advantages, they have not been used in mission- or safety-critical applications, because performance and safety guarantees cannot be provided at development time, a major prerequisite for safety certification (e.g., by the FAA or NASA). Verification and Validation (V&V) of an adaptive controller requires the development of new analysis techniques which can demonstrate that the control system behaves safely under all operating conditions. Because of the requirement to adapt toward unforeseen changes during operation, i.e., in real time, design-time V&V is not sufficient.

  12. Adaptive dynamic programming for finite-horizon optimal control of discrete-time nonlinear systems with ε-error bound.

    PubMed

    Wang, Fei-Yue; Jin, Ning; Liu, Derong; Wei, Qinglai

    2011-01-01

    In this paper, we study the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach. The idea is to use an iterative ADP algorithm to obtain the optimal control law which makes the performance index function close to the greatest lower bound of all performance indices within an ε-error bound. The optimal number of control steps can also be obtained by the proposed ADP algorithms. A convergence analysis of the proposed ADP algorithms in terms of performance index function and control policy is made. In order to facilitate the implementation of the iterative ADP algorithms, neural networks are used for approximating the performance index function, computing the optimal control policy, and modeling the nonlinear system. Finally, two simulation examples are employed to illustrate the applicability of the proposed method. PMID:20876014
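
    A tabular toy stand-in for the iterative structure described above: backward value iteration is repeated until the change in the performance index falls within the ε bound, and the number of iterations plays the role of the ε-optimal number of control steps. The grid discretization, dynamics, and cost are assumptions; the paper approximates these quantities with neural networks rather than tables.

      import numpy as np

      # Toy discrete-time nonlinear system on a discretized state grid (an assumption).
      xs = np.linspace(-2.0, 2.0, 81)          # state grid
      us = np.linspace(-1.0, 1.0, 21)          # control grid

      def f(x, u):                              # assumed nonlinear dynamics
          return 0.9 * x + 0.5 * np.sin(x) * u

      def cost(x, u):                           # assumed utility (running cost)
          return x * x + 0.1 * u * u

      eps = 1e-3
      V = np.zeros_like(xs)                     # V_0: terminal cost taken as zero
      for i in range(1, 200):
          V_new = np.empty_like(V)
          for j, x in enumerate(xs):
              # Bellman backup, projecting the next state onto the grid.
              q = [cost(x, u) + V[np.abs(xs - f(x, u)).argmin()] for u in us]
              V_new[j] = min(q)
          diff = np.max(np.abs(V_new - V))
          V = V_new
          if diff <= eps:                       # performance index within the eps bound
              break

      print(f"performance index converged after {i} iterations (eps-optimal horizon length)")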

  13. Adaptive coupling optimized spiking coherence and synchronization in Newman-Watts neuronal networks

    NASA Astrophysics Data System (ADS)

    Gong, Yubing; Xu, Bo; Wu, Ya'nan

    2013-09-01

    In this paper, we have numerically studied the effect of adaptive coupling on the temporal coherence and synchronization of spiking activity in Newman-Watts Hodgkin-Huxley neuronal networks. It is found that random shortcuts can enhance the spiking synchronization more rapidly when the increment speed of adaptive coupling is increased, and can optimize the temporal coherence of spikes only when the increment speed of adaptive coupling is appropriate. It is also found that adaptive coupling strength can enhance the synchronization of spikes and can optimize the temporal coherence of spikes when random shortcuts are appropriate. These results show that adaptive coupling strongly influences the spiking activity associated with random shortcuts and can enhance and optimize the temporal coherence and synchronization of spiking activity of the network. These findings can help better understand the role of adaptive coupling in improving information processing and transmission in neural systems.

  14. Enhancing Functional Performance using Sensorimotor Adaptability Training Programs

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Mulavara, A. P.; Peters, B. T.; Brady, R.; Audas, C.; Ruttley, T. M.; Cohen, H. S.

    2009-01-01

    During the acute phase of adaptation to novel gravitational environments, sensorimotor disturbances have the potential to disrupt the ability of astronauts to perform functional tasks. The goal of this project is to develop a sensorimotor adaptability (SA) training program designed to facilitate recovery of functional capabilities when astronauts transition to different gravitational environments. The project conducted a series of studies that investigated the efficacy of treadmill training combined with a variety of sensory challenges designed to increase adaptability including alterations in visual flow, body loading, and support surface stability.

  15. Optical Design and Optimization of Translational Reflective Adaptive Optics Ophthalmoscopes

    NASA Astrophysics Data System (ADS)

    Sulai, Yusufu N. B.

    The retina serves as the primary detector for the biological camera that is the eye. It is composed of numerous classes of neurons and support cells that work together to capture and process an image formed by the eye's optics, which is then transmitted to the brain. Loss of sight due to retinal or neuro-ophthalmic disease can prove devastating to one's quality of life, and the ability to examine the retina in vivo is invaluable in the early detection and monitoring of such diseases. Adaptive optics (AO) ophthalmoscopy is a promising diagnostic tool in early stages of development, still facing significant challenges before it can become a clinical tool. The work in this thesis is a collection of projects with the overarching goal of broadening the scope and applicability of this technology. We begin by providing an optical design approach for AO ophthalmoscopes that reduces the aberrations that degrade the performance of the AO correction. Next, we demonstrate how to further improve image resolution through the use of amplitude pupil apodization and non-common path aberration correction. This is followed by the development of a viewfinder which provides a larger field of view for retinal navigation. Finally, we conclude with the development of an innovative non-confocal light detection scheme which improves the non-invasive visualization of retinal vasculature and reveals the cone photoreceptor inner segments in healthy and diseased eyes.

  16. Modeling-Error-Driven Performance-Seeking Direct Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V.; Kaneshige, John; Krishnakumar, Kalmanje; Burken, John

    2008-01-01

    This paper presents a stable discrete-time adaptive law that targets modeling errors in a direct adaptive control framework. The update law was developed in our previous work for the adaptive disturbance rejection application. The approach is based on the philosophy that without modeling errors, the original control design has been tuned to achieve the desired performance. The adaptive control should, therefore, work towards getting this performance even in the face of modeling uncertainties/errors. In this work, the baseline controller uses dynamic inversion with proportional-integral augmentation. Dynamic inversion is carried out using the assumed system model. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. Contrary to the typical Lyapunov-based adaptive approaches that guarantee only stability, the current approach investigates conditions for stability as well as performance. A high-fidelity F-15 model is used to illustrate the overall approach.

  17. Multi-objective parameter optimization of common land model using adaptive surrogate modeling

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.

    2015-05-01

    Parameter specification usually has significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task due to the following reasons: (1) LSMs usually have too many adjustable parameters (20 to 100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a large number of model runs (typically ~10^5-10^6), which makes parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet the aforementioned challenges, which includes the following steps: (1) using parameter screening to reduce the number of adjustable parameters; (2) using surrogate models to emulate the responses of dynamic models to the variation of adjustable parameters; (3) using an adaptive strategy to improve the efficiency of surrogate-modeling-based optimization; and (4) using a weighting function to transfer multi-objective optimization to single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column application of an LSM, the Common Land Model (CoLM), and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can achieve optimal parameters efficiently and effectively. Moreover, this result implies the possibility of calibrating other large complex dynamic models, such as regional-scale LSMs, atmospheric models and climate models.
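
    A minimal sketch of steps (2)-(4) of the framework on a synthetic two-parameter problem: a cheap radial-basis-function surrogate emulates the expensive model, new runs are added adaptively at the surrogate's current minimizer, and a weighting function collapses two error metrics into a single objective. The test functions, weights, and RBF surrogate are assumptions, not the CoLM setup or the surrogate used in the study; parameter screening is omitted.

      import numpy as np

      rng = np.random.default_rng(1)

      # Two synthetic "error metrics" standing in for expensive LSM runs (an assumption).
      def run_model(p):
          x, y = p
          err_water = (x - 0.3) ** 2 + 0.5 * (y - 0.7) ** 2
          err_energy = 0.5 * (x - 0.6) ** 2 + (y - 0.4) ** 2
          return err_water, err_energy

      w = np.array([0.5, 0.5])                  # weighting function -> single objective

      def objective(p):
          return float(w @ np.array(run_model(p)))

      def rbf_fit(X, y, eps=1.0):
          """Gaussian RBF interpolant used as a cheap surrogate of the objective."""
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          coef = np.linalg.solve(np.exp(-eps * d2) + 1e-8 * np.eye(len(X)), y)
          return lambda q: float(np.exp(-eps * ((X - q) ** 2).sum(-1)) @ coef)

      # Initial design, then adaptive infill: refit the surrogate and spend a real
      # model run only at the surrogate's current minimizer.
      X = rng.uniform(0, 1, size=(10, 2))
      y = np.array([objective(p) for p in X])
      for _ in range(15):
          surrogate = rbf_fit(X, y)
          cand = rng.uniform(0, 1, size=(2000, 2))          # cheap surrogate search
          p_next = cand[np.argmin([surrogate(q) for q in cand])]
          X = np.vstack([X, p_next])
          y = np.append(y, objective(p_next))

      best = X[np.argmin(y)]
      print("best parameters:", np.round(best, 3), "weighted error:", round(float(y.min()), 5))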

  18. Research on web performance optimization principles and models

    NASA Astrophysics Data System (ADS)

    Wang, Xin

    2013-03-01

    The rapid development of the Internet has made Web performance optimization an increasingly prominent issue, so optimizing Web performance has become unavoidable. The first principle of Web performance optimization is to understand that every gain has a cost and that returns diminish; a poorly chosen optimization can even reduce Web performance, and starting from the highest level of the system yields the largest gains. The technical models for improving Web performance are: sharing costs, high-speed caching, profiles, parallel processing, and simplified treatment. Based on this study, crucial recommendations for Web performance optimization are given; they are significant for improving the performance of Web usage and accelerating the efficient use of the Internet.

  19. Skeletal muscle adaptations and muscle genomics of performance horses.

    PubMed

    Rivero, José-Luis L; Hill, Emmeline W

    2016-03-01

    Skeletal muscles in horses are characterised by specific adaptations, which are the result of the natural evolution of the horse as a grazing animal, centuries of selective breeding and the adaptability of this tissue in response to training. These adaptations include an increased muscle mass relative to body weight, a great locomotor efficiency based upon an admirable muscle-tendon architectural design and an adaptable fibre-type composition with intrinsic shortening velocities greater than would be predicted from an animal of comparable body size. Furthermore, equine skeletal muscles have a high mitochondrial volume that permits a higher whole animal aerobic capacity, as well as large intramuscular stores of energy substrates (glycogen in particular). Finally, high buffer and lactate transport capacities preserve muscles against fatigue during anaerobic exercise. Many of these adaptations can improve with training. The publication of the equine genome sequence in 2009 has provided a major advance towards an improved understanding of equine muscle physiology. Equine muscle genomics studies have revealed a number of genes associated with elite physical performance and have also identified changes in structural and metabolic genes following exercise and training. Genes involved in muscle growth, muscle contraction and specific metabolic pathways have been found to be functionally relevant for the early performance evaluation of elite athletic horses. The candidate genes discussed in this review are important for a healthy individual to improve performance. However, muscle performance limiting conditions are widespread in horses and many of these conditions are also genetically influenced. PMID:26831154

  20. The 15-meter antenna performance optimization using an interdisciplinary approach

    NASA Technical Reports Server (NTRS)

    Grantham, William L.; Schroeder, Lyle C.; Bailey, Marion C.; Campbell, Thomas G.

    1988-01-01

    A 15-meter diameter deployable antenna has been built and is being used as an experimental test system with which to develop interdisciplinary controls, structures, and electromagnetics technology for large space antennas. The program objective is to study interdisciplinary issues important in optimizing large space antenna performance for a variety of potential users. The 15-meter antenna utilizes a hoop column structural concept with a gold-plated molybdenum mesh reflector. One feature of the design is the use of adjustable control cables to improve the paraboloid reflector shape. Manual adjustment of the cords after initial deployment improved surface smoothness relative to the build accuracy from 0.140 in. RMS to 0.070 in. Preliminary structural dynamics tests and near-field electromagnetic tests were made. The antenna is now being modified for further testing. Modifications include addition of a precise motorized control cord adjustment system to make the reflector surface smoother and an adaptive feed for electronic compensation of reflector surface distortions. Although the previous test results show good agreement between calculated and measured values, additional work is needed to study modelling limits for each discipline, evaluate the potential of adaptive feed compensation, and study closed-loop control performance in a dynamic environment.

  1. Performance Benefits Associated with Context-Dependent Arm Pointing Adaptation

    NASA Technical Reports Server (NTRS)

    Seidler, R. D.; Bloomberg, J. J.; Stelmach, George E.

    2000-01-01

    Our previous work has demonstrated that head orientation can be used as a contextual cue to switch between multiple adaptive states. Subjects were assigned to one of three groups: the head orientation group tilted the head towards the right shoulder when drawing under a 0.5 gain of display and towards the left shoulder when drawing under a 1.5 gain of display; the target orientation group had the home and target positions rotated counterclockwise when drawing under the 0.5 gain and clockwise for the 1.5 gain; the arm posture group changed the elbow angle of the arm they were not drawing with from full flexion to full extension with 0.5 and 1.5 gain display changes. The head orientation cue was effectively associated with the multiple gains, in comparison to the control conditions. The purpose of the current investigation was to determine whether this context-dependent adaptation results in any savings in terms of performance measures such as movement duration and movement smoothness when subjects switch between multiple adaptive states. Subjects in the head adaptation group demonstrated reduced movement duration and increased movement smoothness (measured via normalized jerk scores) in comparison to the two control groups when switching between the 0.5 and 1.5 gain of display. This work has demonstrated not only that subjects can acquire context-dependent adaptation, but also that it results in a significant savings of performance upon transfer between adaptive states.

  2. Optimizing Hydronic System Performance in Residential Applications

    SciTech Connect

    Arena, L.; Faakye, O.

    2013-10-01

    Even though new homes constructed with hydronic heat comprise only 3% of the market (US Census Bureau 2009), of the 115 million existing homes in the United States, almost 14 million of those homes (11%) are heated with steam or hot water systems according to 2009 US Census data. Therefore, improvements in hydronic system performance could result in significant energy savings in the US. When operating properly, the combination of a gas-fired condensing boiler with baseboard convectors and an indirect water heater is a viable option for high-efficiency residential space heating in cold climates. Based on previous research efforts, however, it is apparent that these types of systems are typically not designed and installed to achieve maximum efficiency. Furthermore, guidance on proper design and commissioning for heating contractors and energy consultants is hard to find and is not comprehensive. Through modeling and monitoring, CARB sought to determine the optimal combination(s) of components - pumps, high efficiency heat sources, plumbing configurations and controls - that result in the highest overall efficiency for a hydronic system when baseboard convectors are used as the heat emitter. The impact of variable-speed pumps on energy use and system performance was also investigated along with the effects of various control strategies and the introduction of thermal mass.

  3. Adaptive Performance Seeking Control Using Fuzzy Model Reference Learning Control and Positive Gradient Control

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    1997-01-01

    Performance Seeking Control attempts to find the operating condition that will generate optimal performance and control the plant at that operating condition. In this paper a nonlinear multivariable Adaptive Performance Seeking Control (APSC) methodology will be developed and it will be demonstrated on a nonlinear system. The APSC is comprised of the Positive Gradient Control (PGC) and the Fuzzy Model Reference Learning Control (FMRLC). The PGC computes the positive gradients of the desired performance function with respect to the control inputs in order to drive the plant set points to the operating point that will produce optimal performance. The PGC approach will be derived in this paper. The feedback control of the plant is performed by the FMRLC. For the FMRLC, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for the effective tuning of the FMRLC controller.
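
    The positive-gradient idea can be sketched as follows: the gradient of a measured performance function with respect to the control set points is probed (here by finite differences on an assumed performance map) and the set points are driven along the positive gradient toward the optimal operating point. The performance map, step size, and probing scheme are assumptions, and the FMRLC feedback loop that holds the plant at the commanded set points is not reproduced.

      import numpy as np

      # Assumed static performance map J(set points); in the record this is measured
      # from the plant while the feedback controller tracks the commanded set points.
      def performance(sp):
          x, y = sp
          return -(x - 1.2) ** 2 - 0.5 * (y + 0.4) ** 2 + 3.0   # unknown to the controller

      sp = np.array([0.0, 0.0])      # initial operating set points
      step = 0.2                     # gradient-ascent step size (assumption)
      delta = 0.05                   # perturbation used to probe the gradient

      for k in range(50):
          grad = np.zeros_like(sp)
          for i in range(len(sp)):
              e = np.zeros_like(sp)
              e[i] = delta
              grad[i] = (performance(sp + e) - performance(sp - e)) / (2 * delta)
          sp = sp + step * grad      # drive set points along the positive gradient
          if np.linalg.norm(grad) < 1e-3:
              break

      print("set points at (near-)optimal performance:", np.round(sp, 3))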

  4. GASIFICATION PLANT COST AND PERFORMANCE OPTIMIZATION

    SciTech Connect

    Samuel S. Tam

    2002-05-01

    The goal of this series of design and estimating efforts was to start from the as-built design and actual operating data from the DOE sponsored Wabash River Coal Gasification Repowering Project and to develop optimized designs for several coal and petroleum coke IGCC power and coproduction projects. First, the team developed a design for a grass-roots plant equivalent to the Wabash River Coal Gasification Repowering Project to provide a starting point and a detailed mid-year 2000 cost estimate based on the actual as-built plant design and subsequent modifications (Subtask 1.1). This unoptimized plant has a thermal efficiency of 38.3% (HHV) and a mid-year 2000 EPC cost of 1,681 $/kW. This design was enlarged and modified to become a Petroleum Coke IGCC Coproduction Plant (Subtask 1.2) that produces hydrogen, industrial grade steam, and fuel gas for an adjacent Gulf Coast petroleum refinery in addition to export power. A structured Value Improving Practices (VIP) approach was applied to reduce costs and improve performance. The base case (Subtask 1.3) Optimized Petroleum Coke IGCC Coproduction Plant increased the power output by 16% and reduced the plant cost by 23%. The study looked at several options for gasifier sparing to enhance availability. Subtask 1.9 produced a detailed report on this availability analysis study. The Subtask 1.3 Next Plant, which retains the preferred spare gasification train approach, only reduced the cost by about 21%, but it has the highest availability (94.6%) and produces power at 30 $/MW-hr (at a 12% ROI). Thus, such a coke-fueled IGCC coproduction plant could fill a near term niche market. In all cases, the emissions performance of these plants is superior to the Wabash River project. Subtasks 1.5A and B developed designs for single-train coal and coke-fueled power plants. This side-by-side comparison of these plants, which contain the Subtask 1.3 VIP enhancements, showed their similarity both in design and cost (1,318 $/kW for the

  5. Towards the adaptive optimization of field-free molecular alignment

    NASA Astrophysics Data System (ADS)

    Rouzée, Arnaud; Hertz, Edouard; Lavorel, Bruno; Faucher, Olivier

    2008-04-01

    We theoretically report the optimization of field-free molecular alignment by spectral phase shaping of femtosecond laser pulses. Optimal pulse shapes are designed iteratively by an evolutionary algorithm in conjunction with a non-perturbative regime calculation. The investigation is conducted in O2 and N2 under realistic conditions of intensity, temperature and pulse shaping. We demonstrate that specific tailored pulses can provide significant maximization of field-free alignment compared to Fourier-transform-limited pulses of the same energy. The underlying control mechanism is discussed. The effect of pulse energy and temperature is analysed, leading to the identification of a general criterion for successful optimization. Finally, the optimal spectral phase learned from the algorithm is rather smooth and can be described by a representation in terms of a sigmoidal function. We show that the use of a low-dimensional parametrization of the phase yields an efficient optimization of the alignment within a highly reduced convergence time.

  6. GASIFICATION PLANT COST AND PERFORMANCE OPTIMIZATION

    SciTech Connect

    Sheldon Kramer

    2003-09-01

    This project developed optimized designs and cost estimates for several coal and petroleum coke IGCC coproduction projects that produced hydrogen, industrial grade steam, and hydrocarbon liquid fuel precursors in addition to power. The as-built design and actual operating data from the DOE sponsored Wabash River Coal Gasification Repowering Project were the starting point for this study that was performed by Bechtel, Global Energy and Nexant under Department of Energy contract DE-AC26-99FT40342. First, the team developed a design for a grass-roots plant equivalent to the Wabash River Coal Gasification Repowering Project to provide a starting point and a detailed mid-year 2000 cost estimate based on the actual as-built plant design and subsequent modifications (Subtask 1.1). This non-optimized plant has a thermal efficiency to power of 38.3% (HHV) and a mid-year 2000 EPC cost of 1,681 $/kW. This design was enlarged and modified to become a Petroleum Coke IGCC Coproduction Plant (Subtask 1.2) that produces hydrogen, industrial grade steam, and fuel gas for an adjacent Gulf Coast petroleum refinery in addition to export power. A structured Value Improving Practices (VIP) approach was applied to reduce costs and improve performance. The base case (Subtask 1.3) Optimized Petroleum Coke IGCC Coproduction Plant increased the power output by 16% and reduced the plant cost by 23%. The study looked at several options for gasifier sparing to enhance availability. Subtask 1.9 produced a detailed report on this availability analysis study. The Subtask 1.3 Next Plant, which retains the preferred spare gasification train approach, only reduced the cost by about 21%, but it has the highest availability (94.6%) and produces power at 30 $/MW-hr (at a 12% ROI). Thus, such a coke-fueled IGCC coproduction plant could fill a near term niche market. In all cases, the emissions performance of these plants is superior to the Wabash River project. Subtasks 1.5A and B developed designs for

  7. Skeletal adaptation to external loads optimizes mechanical properties: fact or fiction

    NASA Technical Reports Server (NTRS)

    Turner, R. T.

    2001-01-01

    The skeleton adapts to a changing mechanical environment but the widely held concept that bone cells are programmed to respond to local mechanical loads to produce an optimal mechanical structure is not consistent with the high frequency of bone fractures. Instead, the author suggests that other important functions of bone compete with mechanical adaptation to determine structure. As a consequence of competing demands, bone architecture never achieves an optimal mechanical structure. c2001 Lippincott Williams & Wilkins, Inc.

  8. Optimal Multitrial Prediction Combination and Subject-Specific Adaptation for Minimal Training Brain Switch Designs.

    PubMed

    Spyrou, Loukianos; Blokland, Yvonne; Farquhar, Jason; Bruhn, Jorgen

    2016-06-01

    Brain-Computer Interface (BCI) systems are traditionally designed by taking into account user-specific data to enable practical use. More recently, subject independent (SI) classification algorithms have been developed which bypass the subject specific adaptation and enable rapid use of the system. A brain switch is a particular BCI system where the system is required to distinguish between two separate mental tasks corresponding to the on-off commands of a switch. Such applications require a low false positive rate (FPR) while having an acceptable response time (RT) until the switch is activated. In this work, we develop a methodology that produces optimal brain switch behavior through subject specific (SS) adaptation of: a) a multitrial prediction combination model and b) an SI classification model. We propose a statistical model of combining classifier predictions that enables optimal FPR calibration through a short calibration session. We trained an SI classifier on a training synchronous dataset and tested our method on separate holdout synchronous and asynchronous brain switch experiments. Although our SI model obtained similar performance between training and holdout datasets (86% and 85% for the synchronous and 69% and 66% for the asynchronous), the between-subject FPR and TPR variability was high (up to 62%). The short calibration session was then employed to alleviate that problem and provide decision thresholds that achieve, when possible, a target FPR of 1% with good accuracy for both datasets. PMID:26529768
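
    A toy stand-in for the calibration step: per-trial classifier scores are combined over several consecutive trials (here by a simple mean rather than the paper's statistical model), and a short calibration set of rest data is used to place the decision threshold at the quantile corresponding to a target FPR of 1%. The score distributions and the number of combined trials are assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic per-trial classifier scores (assumption): rest vs. intended switch.
      def scores(size, active):
          return rng.normal(0.8 if active else 0.0, 1.0, size=size)

      K = 5                                   # number of consecutive trials combined per decision

      def combined(n_decisions, active):
          return scores((n_decisions, K), active).mean(axis=1)   # simple mean combination

      # Short calibration session: estimate the threshold giving FPR ~ 1% on rest data.
      calib_rest = combined(2000, active=False)
      thr = np.quantile(calib_rest, 0.99)     # 99th percentile of rest scores

      # Hold-out evaluation.
      test_rest = combined(5000, active=False)
      test_active = combined(5000, active=True)
      fpr = float((test_rest > thr).mean())
      tpr = float((test_active > thr).mean())
      print(f"threshold={thr:.3f}  FPR={fpr:.3%}  TPR={tpr:.3%}")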

  9. A self-adaptive parameter optimization algorithm in a real-time parallel image processing system.

    PubMed

    Li, Ge; Zhang, Xuehe; Zhao, Jie; Zhang, Hongli; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    To address the trade-offs by which precision, speed, robustness, and other parameters constrain each other in a parallel-processed vision servo system, this paper proposes an adaptive load-capacity-balance strategy for the servo parameter optimization algorithm (ALBPO) to improve computing precision and to achieve a high detection ratio without compromising the servo cycle. We use load capacity (LC) functions to estimate the load on each processor and then continuously self-adapt toward a balanced status based on the fluctuating LC results; meanwhile, we select a proper set of target detection and location parameters according to the LC results. Compared with current load balance algorithms, the algorithm proposed in this paper operates without prior knowledge of the maximum or current load of the processors, which gives it great extensibility. Simulation results showed that the ALBPO algorithm has great merits in load balance performance, realizing the optimization of QoS for each processor and fulfilling the balance requirements on servo cycle, precision, and robustness of the parallel-processed vision servo system. PMID:24174920

  10. A Reflective Gaussian Coronagraph for Extreme Adaptive Optics: Laboratory Performance

    NASA Astrophysics Data System (ADS)

    Park, Ryeojin; Close, Laird M.; Siegler, Nick; Nielsen, Eric L.; Stalcup, Thomas

    2006-11-01

    We report laboratory results of a coronagraphic test bench to assess the intensity reduction differences between a ``Gaussian'' tapered focal plane coronagraphic mask and a classical hard-edged ``top hat'' function mask at extreme adaptive optics (ExAO) Strehl ratios of ~94%. However, unlike a traditional coronagraph design, we insert a reflective focal plane mask at 45° to the optical axis. We also use an intermediate secondary mask (mask 2) before a final image in order to block additional mask-edge-diffracted light. The test bench simulates the optical train of ground-based telescopes (in particular, the 8.1 m Gemini North Telescope). It includes one spider vane, different mask radii (r = 1.9λ/D, 3.7λ/D, and 7.4λ/D), and two types of reflective focal plane masks (hard-edged top-hat and Gaussian tapered profiles). In order to investigate the relative performance of these competing coronagraphic designs with regard to extrasolar planet detection sensitivity, we utilize the simulation of realistic extrasolar planet populations (Nielsen et al.). With an appropriate translation of our laboratory results to expected telescope performance, a Gaussian tapered mask radius of 3.7λ/D with an additional mask (mask 2) performs best (highest planet detection sensitivity). For a full survey with this optimal design, the simulation predicts that ~30% more planets would be detected than with a top-hat function mask of similar size with mask 2. Using the best design, the point contrast ratio between the stellar point-spread function (PSF) peak and the coronagraphic PSF at 10λ/D (0.4" in the H band if D = 8.1 m) is ~10 times higher than a classical Lyot top-hat coronagraph. Hence, we find that a Gaussian apodized mask with an additional blocking mask is superior (~10 times higher contrast) to the use of a classical Lyot coronagraph for ExAO-like Strehl ratios.

  11. Stability and Performance Metrics for Adaptive Flight Control

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens

    2009-01-01

    This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since the adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control methods. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.

  12. Neural network based adaptive control of nonlinear plants using random search optimization algorithms

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Wang, Shyh J.

    1992-01-01

    This paper presents a method for utilizing artificial neural networks for direct adaptive control of dynamic systems with poorly known dynamics. The neural network weights (controller gains) are adapted in real time using state measurements and a random search optimization algorithm. The results are demonstrated via simulation using two highly nonlinear systems.

  13. Automatic carrier landing system for V/STOL aircraft using L1 adaptive and optimal control

    NASA Astrophysics Data System (ADS)

    Hariharapura Ramesh, Shashank

    This thesis presents a framework for developing automatic carrier landing systems for aircraft with vertical or short take-off and landing capability using two different control strategies: gain-scheduled linear optimal control and L1 adaptive control. The carrier landing sequence of V/STOL aircraft involves large variations in dynamic pressure and aerodynamic coefficients arising from the transition from aerodynamically supported to jet-borne flight, descent to the touchdown altitude, and turns performed to align with the runway. Consequently, the dynamics of the aircraft exhibit highly non-linear behavior with variations in flight conditions prior to touchdown, which implies the need for non-linear control techniques to achieve automatic landing. Gain-scheduling has been one of the most widely employed techniques for control of aircraft; it involves designing linear controllers for numerous trimmed flight conditions and interpolating them to achieve global non-linear control. Adaptive control, on the other hand, eliminates the need to schedule the controller parameters, as they adapt to changing flight conditions.

  14. Optimization of an adaptive SPECT system with the scanning linear estimator

    NASA Astrophysics Data System (ADS)

    Ghanbari, Nasrin; Clarkson, Eric; Kupinski, Matthew A.; Li, Xin

    2015-08-01

    The adaptive single-photon emission computed tomography (SPECT) system studied here acquires an initial scout image to obtain preliminary information about the object. Then the configuration is adjusted by selecting the size of the pinhole and the magnification that optimize system performance on an ensemble of virtual objects generated to be consistent with the scout data. In this study the object is a lumpy background that contains a Gaussian signal with a variable width and amplitude. The virtual objects in the ensemble are imaged by all of the available configurations and the subsequent images are evaluated with the scanning linear estimator to obtain an estimate of the signal width and amplitude. The ensemble mean squared error (EMSE) on the virtual ensemble between the estimated and the true parameters serves as the performance figure of merit for selecting the optimum configuration. The results indicate that variability in the original object background, noise and signal parameters leads to a specific optimum configuration in each case. A statistical study carried out for a number of objects show that the adaptive system on average performs better than its nonadaptive counterpart.
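
    A one-dimensional toy version of the configuration-selection step: virtual objects drawn from assumed parameter ranges are imaged under each candidate configuration, a scanning (template-matching) estimator standing in for the scanning linear estimator recovers the signal width and amplitude, and the configuration with the smallest ensemble mean squared error is selected. The imaging model, noise levels, and parameter grids are all assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.linspace(-1, 1, 128)

      def gaussian(width, amp):
          return amp * np.exp(-x**2 / (2 * width**2))

      def psf(blur):
          k = np.exp(-x**2 / (2 * blur**2))
          return k / k.sum()

      def image(obj, blur, noise):
          return np.convolve(obj, psf(blur), mode="same") + rng.normal(0.0, noise, x.size)

      # Candidate configurations: (system blur, noise level) pairs -- assumptions.
      configs = [(0.02, 0.05), (0.05, 0.02), (0.10, 0.01)]

      # Parameter grid scanned by the estimator.
      widths = np.linspace(0.05, 0.4, 30)
      amps = np.linspace(0.5, 2.0, 30)

      # Virtual ensemble consistent with assumed scout information.
      truth = [(rng.uniform(0.1, 0.3), rng.uniform(0.8, 1.5)) for _ in range(20)]

      emse = []
      for blur, noise in configs:
          bank = [(w_, a_, np.convolve(gaussian(w_, a_), psf(blur), mode="same"))
                  for w_ in widths for a_ in amps]          # noiseless templates
          err = []
          for w_true, a_true in truth:
              img = image(gaussian(w_true, a_true), blur, noise)
              score = [-np.sum((img - t) ** 2) for (_, _, t) in bank]
              w_hat, a_hat, _ = bank[int(np.argmax(score))]
              err.append((w_hat - w_true) ** 2 + (a_hat - a_true) ** 2)
          emse.append(float(np.mean(err)))

      best_cfg = configs[int(np.argmin(emse))]
      print("EMSE per configuration:", np.round(emse, 4), "-> selected:", best_cfg)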

  15. Performance breakdown in optimal stimulus decoding

    NASA Astrophysics Data System (ADS)

    Kostal, Lubomir; Lansky, Petr; Pilarski, Stevan

    2015-06-01

    Objective. One of the primary goals of neuroscience is to understand how neurons encode and process information about their environment. The problem is often approached indirectly by examining the degree to which the neuronal response reflects the stimulus feature of interest. Approach. In this context, the methods of signal estimation and detection theory provide the theoretical limits on the decoding accuracy with which the stimulus can be identified. The Cramér-Rao lower bound on the decoding precision is widely used, since it can be evaluated easily once the mathematical model of the stimulus-response relationship is determined. However, little is known about the behavior of different decoding schemes with respect to the bound if the neuronal population size is limited. Main results. We show that under broad conditions the optimal decoding displays a threshold-like shift in performance in dependence on the population size. The onset of the threshold determines a critical range where a small increment in size, signal-to-noise ratio or observation time yields a dramatic gain in the decoding precision. Significance. We demonstrate the existence of such threshold regions in early auditory and olfactory information coding. We discuss the origin of the threshold effect and its impact on the design of effective coding approaches in terms of relevant population size.
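
    The gap between the Cramér-Rao lower bound and the actual decoding error at small population sizes can be reproduced qualitatively with a standard toy model (an assumption, not the auditory or olfactory data of the paper): Poisson-spiking neurons with Gaussian tuning curves, decoded by grid-search maximum likelihood.

      import numpy as np

      rng = np.random.default_rng(4)

      theta_true = 0.0
      T = 0.2                                  # observation window in seconds (assumption)
      r_max, width = 30.0, 1.0                 # tuning-curve peak rate (Hz) and width
      grid = np.linspace(-3, 3, 601)           # decoding grid over the stimulus

      def rates(theta, centers):
          return r_max * np.exp(-(theta - centers) ** 2 / (2 * width ** 2)) + 1.0  # +1 Hz baseline

      for N in (2, 4, 8, 16, 32, 64):
          centers = np.linspace(-2, 2, N)
          f = rates(theta_true, centers)
          # Cramer-Rao bound for Poisson counts: var >= 1 / I, with I = T * sum f'^2 / f.
          fprime = -(theta_true - centers) / width ** 2 * (f - 1.0)
          crlb = 1.0 / (T * np.sum(fprime ** 2 / f))

          # Empirical MSE of grid-search maximum-likelihood decoding.
          lam_grid = T * rates(grid[:, None], centers[None, :])   # expected counts, (grid, N)
          sq_err = []
          for _ in range(400):
              n = rng.poisson(T * f)                               # observed spike counts
              loglik = (n * np.log(lam_grid)).sum(axis=1) - lam_grid.sum(axis=1)
              theta_hat = grid[np.argmax(loglik)]
              sq_err.append((theta_hat - theta_true) ** 2)
          print(f"N={N:3d}  CRLB={crlb:.4f}  empirical MSE={np.mean(sq_err):.4f}")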

  16. High performance magnet power supply optimization

    SciTech Connect

    Jackson, L.T.

    1988-01-01

    The power supply system for the joint LBL-SLAC proposed accelerator PEP provides the opportunity to take a fresh look at the current techniques employed for controlling large amounts of dc power and the possibility of using a new one. A basic requirement of ±100 ppm regulation is placed on the guide field of the bending magnets and quadrupoles placed around the 2200 meter circumference of the accelerator. The optimization questions to be answered by this paper are threefold: Can a firing circuit be designed to reduce the combined effects of the harmonics and line voltage unbalance to less than 100 ppm in the magnet field? Is the addition of a transistor bank to a nominal SCR-controlled system the way to go, or should one opt for an SCR chopper system running at 1 kHz where multiple supplies are fed from one large dc bus? And how do the three possible systems compare in a cost-performance evaluation?

  17. MSFC Turbine Performance Optimization (TPO) Technology Verification Status

    NASA Technical Reports Server (NTRS)

    Griffin, Lisa W.; Dorney, Daniel J.; Snellgrove, Lauren M.; Zoladz, Thomas F.; Stroud, Richard T.; Turner, James E. (Technical Monitor)

    2002-01-01

    Capability to optimize for turbine performance and accurately predict unsteady loads will allow for increased reliability, Isp, and thrust-to-weight. The development of a fast, accurate, validated aerodynamic design, analysis, and optimization system is required.

  18. Effects of additional interfering signals on adaptive array performance

    NASA Technical Reports Server (NTRS)

    Moses, Randolph L.

    1989-01-01

    The effects of additional interference signals on the performance of a fully adaptive array are considered. The case where the number of interference signals exceeds the number of array degrees of freedom is addressed. It is shown how performance is affected as a function of the number of array elements, the number of interference signals, and the directivity of the array antennas. By using directive auxiliary elements, the performance of the array can be as good as the performance when the additional interference signals are not present.

  19. Adaptive Sensor Optimization and Cognitive Image Processing Using Autonomous Optical Neuroprocessors

    SciTech Connect

    CAMERON, STEWART M.

    2001-10-01

    Measurement and signal intelligence demands have created new requirements for information management and interoperability as they affect surveillance and situational awareness. Integration of on-board autonomous learning and adaptive control structures within a remote sensing platform architecture would substantially improve the utility of intelligence collection by facilitating real-time optimization of measurement parameters for variable field conditions. A problem faced by conventional digital implementations of intelligent systems is the conflict between a distributed parallel structure on a sequential serial interface functionally degrading bandwidth and response time. In contrast, optically designed networks exhibit the massive parallelism and interconnect density needed to perform complex cognitive functions within a dynamic asynchronous environment. Recently, all-optical self-organizing neural networks exhibiting emergent collective behavior which mimic perception, recognition, association, and contemplative learning have been realized using photorefractive holography in combination with sensory systems for feature maps, threshold decomposition, image enhancement, and nonlinear matched filters. Such hybrid information processors depart from the classical computational paradigm based on analytic rules-based algorithms and instead utilize unsupervised generalization and perceptron-like exploratory or improvisational behaviors to evolve toward optimized solutions. These systems are robust to instrumental systematics or corrupting noise and can enrich knowledge structures by allowing competition between multiple hypotheses. This property enables them to rapidly adapt or self-compensate for dynamic or imprecise conditions which would be unstable using conventional linear control models. By incorporating an intelligent optical neuroprocessor in the back plane of an imaging sensor, a broad class of high-level cognitive image analysis problems including geometric

  20. Performance of the Gemini Planet Imager's adaptive optics system.

    PubMed

    Poyneer, Lisa A; Palmer, David W; Macintosh, Bruce; Savransky, Dmitry; Sadakuni, Naru; Thomas, Sandrine; Véran, Jean-Pierre; Follette, Katherine B; Greenbaum, Alexandra Z; Ammons, S Mark; Bailey, Vanessa P; Bauman, Brian; Cardwell, Andrew; Dillon, Daren; Gavel, Donald; Hartung, Markus; Hibon, Pascale; Perrin, Marshall D; Rantakyrö, Fredrik T; Sivaramakrishnan, Anand; Wang, Jason J

    2016-01-10

    The Gemini Planet Imager's adaptive optics (AO) subsystem was designed specifically to facilitate high-contrast imaging. A definitive description of the system's algorithms and technologies as built is given. 564 AO telemetry measurements from the Gemini Planet Imager Exoplanet Survey campaign are analyzed. The modal gain optimizer tracks changes in atmospheric conditions. Science observations show that image quality can be improved with the use of both the spatially filtered wavefront sensor and linear-quadratic-Gaussian control of vibration. The error budget indicates that for all targets and atmospheric conditions AO bandwidth error is the largest term. PMID:26835769

  1. Robust, integrated computational control of NMR experiments to achieve optimal assignment by ADAPT-NMR.

    PubMed

    Bahrami, Arash; Tonelli, Marco; Sahu, Sarata C; Singarapu, Kiran K; Eghbalnia, Hamid R; Markley, John L

    2012-01-01

    ADAPT-NMR (Assignment-directed Data collection Algorithm utilizing a Probabilistic Toolkit in NMR) represents a groundbreaking prototype for automated protein structure determination by nuclear magnetic resonance (NMR) spectroscopy. With a [(13)C,(15)N]-labeled protein sample loaded into the NMR spectrometer, ADAPT-NMR delivers complete backbone resonance assignments and secondary structure in an optimal fashion without human intervention. ADAPT-NMR achieves this by implementing a strategy in which the goal of optimal assignment in each step determines the subsequent step by analyzing the current sum of available data. ADAPT-NMR is the first iterative and fully automated approach designed specifically for the optimal assignment of proteins with fast data collection as a byproduct of this goal. ADAPT-NMR evaluates the current spectral information, and uses a goal-directed objective function to select the optimal next data collection step(s) and then directs the NMR spectrometer to collect the selected data set. ADAPT-NMR extracts peak positions from the newly collected data and uses this information in updating the analysis resonance assignments and secondary structure. The goal-directed objective function then defines the next data collection step. The procedure continues until the collected data support comprehensive peak identification, resonance assignments at the desired level of completeness, and protein secondary structure. We present test cases in which ADAPT-NMR achieved results in two days or less that would have taken two months or more by manual approaches. PMID:22427982

  2. Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

    2016-01-01

    This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
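
    A minimal sketch of the two ingredients named above, reduced to a single control surface: recursive least squares identifies the coefficients of an assumed quadratic drag-versus-deflection model from excited measurements, and a Newton-Raphson step moves the commanded deflection toward the minimum of the identified model. The quadratic model, excitation signal, and noise level are assumptions, not the GTM/VCCTEF implementation.

      import numpy as np

      rng = np.random.default_rng(5)

      # Assumed "true" drag coefficient versus a single flap deflection d (a quadratic
      # bowl), observed with a little measurement noise.
      def drag(d):
          return 0.020 + 0.004 * (d - 1.5) ** 2 + rng.normal(0.0, 1e-4)

      theta = np.zeros(3)              # identified model CD(d) ~ c0 + c1*d + c2*d^2
      P = np.eye(3) * 1e3              # RLS covariance, large initial uncertainty
      lam = 0.995                      # forgetting factor (assumption)

      d = 0.0                          # current commanded deflection
      for k in range(300):
          d_exc = d + 0.3 * np.sin(0.2 * k)        # persistent excitation of the surface
          phi = np.array([1.0, d_exc, d_exc ** 2])
          y = drag(d_exc)

          # Recursive least squares update of the model coefficients.
          K = P @ phi / (lam + phi @ P @ phi)
          theta = theta + K * (y - phi @ theta)
          P = (P - np.outer(K, phi @ P)) / lam

          # Newton-Raphson step toward the deflection minimizing the identified model.
          c0, c1, c2 = theta
          if k >= 10 and c2 > 1e-4:                # wait for a sensible curvature estimate
              d = float(np.clip(d - (c1 + 2.0 * c2 * d) / (2.0 * c2), -5.0, 5.0))

      print("identified coefficients:", np.round(theta, 4))
      print("estimated minimum-drag deflection:", round(d, 3), "(true optimum: 1.5)")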

  3. Adaptation of NASA technology for the optimization of orthopedic knee implants

    NASA Technical Reports Server (NTRS)

    Saravanos, D. A.; Mraz, P. J.; Hopkins, D. A.

    1991-01-01

    The NASA technology originally developed for the optimization of composite structures (engine blades) is adapted and applied to the optimization of orthopedic knee implants. A method is developed enabling the tailoring of the implant for optimal interaction with the environment of the tibia. The shapes of the implant components are optimized such that the stresses in the bone are favorably controlled to minimize bone degradation and prevent failures. A pilot tailoring system is developed and the feasibility of the concept is evaluated. The optimization system is expected to provide the means for improving knee prostheses and individual implant tailoring for each patient.

  4. Adaptive time-lapse optimized survey design for electrical resistivity tomography monitoring

    NASA Astrophysics Data System (ADS)

    Wilkinson, Paul B.; Uhlemann, Sebastian; Meldrum, Philip I.; Chambers, Jonathan E.; Carrière, Simon; Oxby, Lucy S.; Loke, M. H.

    2015-10-01

    Adaptive optimal experimental design methods use previous data and results to guide the choice and design of future experiments. This paper describes the formulation of an adaptive survey design technique to produce optimal resistivity imaging surveys for time-lapse geoelectrical monitoring experiments. These survey designs are time-dependent and, compared to dipole-dipole or static optimized surveys that do not change over time, focus a greater degree of the image resolution on regions of the subsurface that are actively changing. The adaptive optimization method is validated using a controlled laboratory monitoring experiment comprising a well-defined cylindrical target moving along a trajectory that changes its depth and lateral position. The algorithm is implemented on a standard PC in conjunction with a modified automated multichannel resistivity imaging system. Data acquisition using the adaptive survey designs requires no more time or power than with comparable standard surveys, and the algorithm processing takes place while the system batteries recharge. The results show that adaptively designed optimal surveys yield a quantitative increase in image quality over and above that produced by using standard dipole-dipole or static (time-independent) optimized surveys.

  5. Generalized Monge-Kantorovich optimization for grid generation and adaptation in LP

    SciTech Connect

    Delzanno, G L; Finn, J M

    2009-01-01

    The Monge-Kantorovich grid generation and adaptation scheme is generalized from a variational principle based on L2 to a variational principle based on Lp. A generalized Monge-Ampere (MA) equation is derived and its properties are discussed. Results for p > 1 are obtained and compared in terms of the quality of the resulting grid. We conclude that for the grid generation application, the formulation based on Lp for p close to unity leads to serious problems associated with the boundary. Results for 1.5 ≲ p ≲ 2.5 are quite good, but there is a fairly narrow range around p = 2 where the results are close to optimal with respect to grid distortion. Furthermore, the Newton-Krylov methods used to solve the generalized MA equation perform best for p = 2.

  6. Performance index and meta-optimization of a direct search optimization method

    NASA Astrophysics Data System (ADS)

    Krus, P.; Ölvander, J.

    2013-10-01

    Design optimization is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a singular performance criterion, the entropy rate index based on Shannon's information theory, taking both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can also be used for optimization of the optimization algorithm itself. In this article the Complex-RF optimization method is described and its performance evaluated and optimized using the established performance criterion. Finally, in order to be able to predict the resources needed for optimization, an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.

  7. Micro Benchmarking, Performance Assertions and Sensitivity Analysis: A Technique for Developing Adaptive Grid Applications

    SciTech Connect

    Corey, I R; Johnson, J R; Vetter, J S

    2002-02-25

    This study presents a technique that can significantly improve the performance of a distributed application by allowing the application to locally adapt to architectural characteristics of distinct resources in a distributed system. Application performance is sensitive to application parameter-system architecture pairings. In a distributed or Grid-enabled application, a single parameter configuration for the whole application will not always be optimal for every participating resource. In particular, some configurations can significantly degrade performance. Furthermore, the behavior of a system may change during the course of the run. The technique described here provides an automated mechanism for run-time adaptation of application parameters to the local system architecture. Using a simulation of a Monte Carlo physics code, the authors demonstrate that this technique can achieve speedups of 18%-37% on individual resources in a distributed environment.
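
    A generic sketch of the mechanism, under assumed names and workloads: each resource micro-benchmarks a small set of candidate parameter values with short probe runs and keeps the locally fastest setting; a long-running application could repeat the probe periodically to track changes in system behavior. The tunable block size and probe sizes are illustrative only and unrelated to the Monte Carlo code used in the study.

      import time
      import numpy as np

      def workload(block_size, n=2_000_000):
          """Stand-in compute kernel whose speed depends on a tunable block size."""
          x = np.arange(n, dtype=np.float64)
          total = 0.0
          for start in range(0, n, block_size):
              total += np.sqrt(x[start:start + block_size]).sum()
          return total

      def micro_benchmark(candidates, repeats=3):
          """Time each candidate parameter on this resource and return the fastest."""
          timings = {}
          for b in candidates:
              t0 = time.perf_counter()
              for _ in range(repeats):
                  workload(b, n=200_000)            # small probe run, not the full job
              timings[b] = (time.perf_counter() - t0) / repeats
          return min(timings, key=timings.get), timings

      best, timings = micro_benchmark([1_000, 10_000, 100_000])
      print("per-candidate probe times:", {k: round(v, 4) for k, v in timings.items()})
      print("locally selected block size:", best)

      # The full job then runs with the locally chosen setting.
      workload(best)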

  8. High-Performance Reactive Fluid Flow Simulations Using Adaptive Mesh Refinement on Thousands of Processors

    NASA Astrophysics Data System (ADS)

    Calder, A. C.; Curtis, B. C.; Dursi, L. J.; Fryxell, B.; Henry, G.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Tufo, H. M.; Truran, J. W.; Zingale, M.

    We present simulations and performance results of nuclear burning fronts in supernovae on the largest domain and at the finest spatial resolution studied to date. These simulations were performed on the Intel ASCI-Red machine at Sandia National Laboratories using FLASH, a code developed at the Center for Astrophysical Thermonuclear Flashes at the University of Chicago. FLASH is a modular, adaptive mesh, parallel simulation code capable of handling compressible, reactive fluid flows in astrophysical environments. FLASH is written primarily in Fortran 90, uses the Message-Passing Interface library for inter-processor communication and portability, and employs the PARAMESH package to manage a block-structured adaptive mesh that places blocks only where the resolution is required and tracks rapidly changing flow features, such as detonation fronts, with ease. We describe the key algorithms and their implementation as well as the optimizations required to achieve sustained performance of 238 GFLOPS on 6420 processors of ASCI-Red in 64-bit arithmetic.

  9. Design optimization of long period waveguide grating devices for refractive index sensing using adaptive particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Semwal, Girish; Rastogi, Vipul

    2016-01-01

    A grating-assisted surface plasmon resonance waveguide grating has been designed and optimized for sensing applications. Adaptive particle swarm optimization, in conjunction with a derivative-free method for mode computation, has been used for design optimization of the LPWG sensor. The effect of metal thickness and cladding layer thickness on the core mode and the surface plasmon mode has been analyzed in detail. The results have been utilized as benchmarks for deciding the bounds of these variables in the optimization process. Two waveguide structures have been demonstrated for the grating-assisted surface plasmon resonance refractive index sensor. Sensitivities of 3.5×10^4 nm/RIU and 5.0×10^4 nm/RIU have been achieved with the optimized waveguide and grating parameters.
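
    A compact adaptive particle swarm optimizer in which the inertia weight is adjusted from search progress, applied to a generic two-parameter test function standing in for the (much more expensive) sensor model. The adaptation rule, bounds, and objective are assumptions and do not correspond to the specific APSO variant or mode solver used in the paper.

      import numpy as np

      rng = np.random.default_rng(6)

      def objective(p):                      # stand-in for the expensive sensor model
          x, y = p
          return (x - 0.7) ** 2 + (y - 0.2) ** 2 + 0.1 * np.sin(8 * x) ** 2

      n_particles, n_iter, dim = 20, 80, 2
      lo, hi = 0.0, 1.0
      pos = rng.uniform(lo, hi, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
      g = pbest[np.argmin(pbest_val)].copy()

      w, c1, c2 = 0.9, 1.5, 1.5              # inertia and acceleration coefficients
      prev_best = pbest_val.min()
      for it in range(n_iter):
          r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
          pos = np.clip(pos + vel, lo, hi)
          vals = np.array([objective(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          g = pbest[np.argmin(pbest_val)].copy()

          # Adaptive rule (assumption): shrink inertia while the swarm keeps improving,
          # enlarge it again when progress stalls to encourage exploration.
          if pbest_val.min() < prev_best - 1e-9:
              w = max(0.4, 0.98 * w)
          else:
              w = min(0.9, 1.05 * w)
          prev_best = pbest_val.min()

      print("best parameters:", np.round(g, 4), "objective:", round(float(pbest_val.min()), 6))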

  10. Adaptive prefetching on POWER7: Improving performance and power consumption

    SciTech Connect

    Jimenez, Victor; Cazorla, Francisco; Gioiosa, Roberto; Buyuktosunoglu, Alper; Bose, Pradip; O'Connel, Francis P.; Mealey, Bruce G.

    2014-10-03

    Hardware data prefetch engines are integral parts of many general purpose server-class microprocessors in the field today. Some prefetch engines allow users to change some of their parameters. But, the prefetcher is usually enabled in a default configuration during system bring-up, and dynamic reconfiguration of the prefetch engine is not an autonomic feature of current machines. Conceptually, however, it is easy to infer that commonly used prefetch algorithms—when applied in a fixed mode—will not help performance in many cases. In fact, they may actually degrade performance due to useless bus bandwidth consumption and cache pollution, which in turn, will also waste power. We present an adaptive prefetch scheme that dynamically modifies the prefetch settings in order to adapt to workloads

  11. Adaptive optimal spectral range for dynamically changing scene

    NASA Astrophysics Data System (ADS)

    Pinsky, Ephi; Siman-tov, Avihay; Peles, David

    2012-06-01

    A novel multispectral video system that continuously optimizes both its spectral range channels and the exposure time of each channel autonomously, under dynamic scenes, varying from short range-clear scene to long range-poor visibility, is currently being developed. Transparency and contrast of high scattering medium of channels with spectral ranges in the near infrared is superior to the visible channels, particularly to the blue range. Longer wavelength spectral ranges that induce higher contrast are therefore favored. Images of 3 spectral channels are fused and displayed for (pseudo) color visualization, as an integrated high contrast video stream. In addition to the dynamic optimization of the spectral channels, optimal real-time exposure time is adjusted simultaneously and autonomously for each channel. A criterion of maximum average signal, derived dynamically from previous frames of the video stream is used (Patent Application - International Publication Number: WO2009/093110 A2, 30.07.2009). This configuration enables dynamic compatibility with the optimal exposure time of a dynamically changing scene. It also maximizes the signal to noise ratio and compensates each channel for the specified value of daylight reflections and sensors response for each spectral range. A possible implementation is a color video camera based on 4 synchronized, highly responsive, CCD imaging detectors, attached to a 4CCD dichroic prism and combined with a common, color corrected, lens. Principal Components Analysis (PCA) technique is then applied for real time "dimensional collapse" in color space, in order to select and fuse, for clear color visualization, the 3 most significant principal channels out of at least 4 characterized by high contrast and rich details in the image data.
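
    A toy sketch of the two adaptive elements described above: each channel's exposure time is nudged toward a target mean signal derived from recent frames, and PCA across the four channels performs the "dimensional collapse" that keeps the three most significant components for pseudo-color display. The synthetic frames, target level, and update rule are assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      H, W, C = 64, 64, 4                         # four synchronized spectral channels

      def capture(exposures):
          """Synthetic correlated channels whose signal scales with exposure time."""
          base = rng.random((H, W))
          scene = np.stack([base * g for g in (1.0, 0.8, 0.6, 0.3)], axis=-1)
          return np.clip(scene * exposures + rng.normal(0, 0.01, (H, W, C)), 0, 1)

      exposures = np.ones(C)
      target = 0.45                               # desired mean signal per channel (assumption)

      for frame in range(10):
          img = capture(exposures)
          means = img.reshape(-1, C).mean(axis=0)
          exposures *= np.clip(target / np.maximum(means, 1e-3), 0.5, 2.0)  # per-channel update

      # PCA "dimensional collapse": keep the 3 most significant components for display.
      X = img.reshape(-1, C) - img.reshape(-1, C).mean(axis=0)
      evals, evecs = np.linalg.eigh(X.T @ X / (X.shape[0] - 1))
      top3 = evecs[:, np.argsort(evals)[::-1][:3]]          # principal directions
      display = (X @ top3).reshape(H, W, 3)                  # fused 3-channel image

      print("final exposures:", np.round(exposures, 3))
      print("fused display shape:", display.shape)
      print("variance captured by 3 components:",
            round(float(np.sort(evals)[::-1][:3].sum() / evals.sum()), 4))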

  12. Sensorimotor Adaptability Training Improves Motor and Dual-Task Performance

    NASA Technical Reports Server (NTRS)

    Bloomberg, J.J.; Peters, B.T.; Mulavara, A.P.; Brady, R.; Batson, C.; Cohen, H.S.

    2009-01-01

    The overall objective of our project is to develop a sensorimotor adaptability (SA) training program designed to facilitate recovery of functional capabilities when astronauts transition to different gravitational environments. The goal of our current study was to determine if SA training using variation in visual flow and support surface motion produces improved performance in a novel sensory environment and demonstrate the retention characteristics of SA training.

  13. Gasification Plant Cost and Performance Optimization

    SciTech Connect

    Samuel Tam; Alan Nizamoff; Sheldon Kramer; Scott Olson; Francis Lau; Mike Roberts; David Stopek; Robert Zabransky; Jeffrey Hoffmann; Erik Shuster; Nelson Zhan

    2005-05-01

    As part of an ongoing effort of the U.S. Department of Energy (DOE) to investigate the feasibility of gasification on a broader level, Nexant, Inc. was contracted to perform a comprehensive study to provide a set of gasification alternatives for consideration by the DOE. Nexant completed the first two tasks (Tasks 1 and 2) of the ''Gasification Plant Cost and Performance Optimization Study'' for the DOE's National Energy Technology Laboratory (NETL) in 2003. These tasks evaluated the use of the E-GAS{trademark} gasification technology (now owned by ConocoPhillips) for the production of power either alone or with polygeneration of industrial grade steam, fuel gas, hydrocarbon liquids, or hydrogen. NETL expanded this effort in Task 3 to evaluate Gas Technology Institute's (GTI) fluidized bed U-GAS{reg_sign} gasifier. The Task 3 study had three main objectives. The first was to examine the application of the gasifier at an industrial application in upstate New York using a Southeastern Ohio coal. The second was to investigate the GTI gasifier in a stand-alone lignite-fueled IGCC power plant application, sited in North Dakota. The final goal was to train NETL personnel in the methods of process design and systems analysis. These objectives were divided into five subtasks. Subtasks 3.2 through 3.4 covered the technical analyses for the different design cases. Subtask 3.1 covered management activities, and Subtask 3.5 covered reporting. Conceptual designs were developed for several coal gasification facilities based on the fluidized bed U-GAS{reg_sign} gasifier. Subtask 3.2 developed two base case designs for industrial combined heat and power facilities using Southeastern Ohio coal that will be located at an upstate New York location. One base case design used an air-blown gasifier, and the other used an oxygen-blown gasifier in order to evaluate their relative economics. Subtask 3.3 developed an advanced design for an air-blown gasification combined heat and power

  14. Optimizing single-nanoparticle two-photon microscopy by in situ adaptive control of femtosecond pulses

    NASA Astrophysics Data System (ADS)

    Li, Donghai; Deng, Yongkai; Chu, Saisai; Jiang, Hongbing; Wang, Shufeng; Gong, Qihuang

    2016-07-01

    Single-nanoparticle two-photon microscopy shows great application potential in super-resolution cell imaging. Here, we report in situ adaptive optimization of single-nanoparticle two-photon luminescence signals by phase and polarization modulations of broadband laser pulses. For polarization-independent quantum dots, phase-only optimization was carried out to compensate the phase dispersion at the focus of the objective. Enhancement of the two-photon excitation fluorescence intensity under dispersion-compensated femtosecond pulses was achieved. For a polarization-dependent single gold nanorod, in situ polarization optimization resulted in greater enhancement of the two-photon photoluminescence intensity than phase-only optimization. The application of in situ adaptive control of femtosecond pulses provides a route toward object-oriented optimization of single-nanoparticle two-photon microscopy for its future applications.

  15. Prediction-based manufacturing center self-adaptive demand side energy optimization in cyber physical systems

    NASA Astrophysics Data System (ADS)

    Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda

    2014-05-01

    Cyber physical systems (CPS) have recently emerged as a new technology that can provide promising approaches to demand side management (DSM), an important capability in industrial power systems. Meanwhile, the manufacturing center is a typical industrial power subsystem with dozens of high energy consumption devices which have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based manufacturing center self-adaptive energy optimization method for demand side management in cyber physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From these data, a pricing strategy is designed based on short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is then used to measure the demand side energy consumption of the manufacturing center. Based on the data collected from the sensing system, the load prediction-based energy optimization scheme is implemented. By employing both the PSO and the CPSO method, the problem of DSM in the manufacturing center is solved. The results of the experiment show that the self-adaptive CPSO energy optimization method enhances optimization by 5% compared with the traditional PSO optimization method.
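
    The abstract names PSO and CPSO but gives no algorithmic detail. The sketch below shows a plain PSO minimizing a hypothetical time-of-use energy cost for a 24-hour load schedule; the price curve, energy requirement, power limits, and penalty weight are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: choose the power drawn in each of 24 hours so that a required
# amount of work gets done while the cost under a time-of-use price curve is minimized.
prices = 0.08 + 0.06 * np.sin(np.linspace(0, 2 * np.pi, 24))   # $/kWh (assumed)
required_energy = 400.0                                         # kWh per day (assumed)
p_min, p_max = 5.0, 40.0                                        # kW limits per hour (assumed)

def cost(schedule):
    energy_cost = float(prices @ schedule)
    shortfall = max(0.0, required_energy - schedule.sum())      # soft penalty for undelivered energy
    return energy_cost + 10.0 * shortfall

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = prices.size
    x = rng.uniform(p_min, p_max, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, p_min, p_max)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, cost(gbest)

schedule, best_cost = pso()
print(f"daily cost: ${best_cost:.2f}")
```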

  16. Natural selection fails to optimize mutation rates for long-term adaptation on rugged fitness landscapes.

    PubMed

    Clune, Jeff; Misevic, Dusan; Ofria, Charles; Lenski, Richard E; Elena, Santiago F; Sanjuán, Rafael

    2008-01-01

    The rate of mutation is central to evolution. Mutations are required for adaptation, yet most mutations with phenotypic effects are deleterious. As a consequence, the mutation rate that maximizes adaptation will be some intermediate value. Here, we used digital organisms to investigate the ability of natural selection to adjust and optimize mutation rates. We assessed the optimal mutation rate by empirically determining what mutation rate produced the highest rate of adaptation. Then, we allowed mutation rates to evolve, and we evaluated the proximity to the optimum. Although we chose conditions favorable for mutation rate optimization, the evolved rates were invariably far below the optimum across a wide range of experimental parameter settings. We hypothesized that the reason that mutation rates evolved to be suboptimal was the ruggedness of fitness landscapes. To test this hypothesis, we created a simplified landscape without any fitness valleys and found that, in such conditions, populations evolved near-optimal mutation rates. In contrast, when fitness valleys were added to this simple landscape, the ability of evolving populations to find the optimal mutation rate was lost. We conclude that rugged fitness landscapes can prevent the evolution of mutation rates that are optimal for long-term adaptation. This finding has important implications for applied evolutionary research in both biological and computational realms. PMID:18818724

  18. Adaptive Edge Detection Using Adjusted ANT Colony Optimization

    NASA Astrophysics Data System (ADS)

    Davoodianidaliki, M.; Abedini, A.; Shankayi, M.

    2013-09-01

    Edges contain important information in an image, and edge detection can be considered a low-level process in image processing. Among the different methods developed for this purpose, traditional operators are simple and rather efficient. Among the swarm intelligence methods developed in the last decade, ACO is particularly capable in this process. This paper uses traditional edge detection operators such as Sobel and Canny as input to ACO and makes the overall process adaptive to the application. Either the gradient magnitude matrix or the edge image can be used for the initial pheromone and ant distribution. Image size reduction is proposed as an efficient smoothing method. A few parameters, such as the area and diameter of the path travelled by the ants, are converted into rules in the pheromone update process. All rules are normalized, and the final value is obtained by averaging.
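
    A minimal sketch of the core idea follows: the pheromone field is initialized from a Sobel gradient magnitude and ants preferentially reinforce strong-gradient pixels. The paper's specific rules (path area and diameter, image-size-reduction smoothing) are not reproduced, and all parameters and the final threshold are illustrative assumptions.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via Sobel kernels (no external dependencies)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def aco_edges(img, n_ants=200, steps=50, evaporation=0.05, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    heuristic = sobel_magnitude(img)
    heuristic /= heuristic.max() + 1e-9
    pheromone = heuristic.copy()                 # initial pheromone from the edge magnitude
    h, w = img.shape
    ants = np.column_stack([rng.integers(0, h, n_ants), rng.integers(0, w, n_ants)])
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for _ in range(steps):
        for a in range(n_ants):
            i, j = ants[a]
            cand = [((i + di) % h, (j + dj) % w) for di, dj in moves]
            weights = np.array([(pheromone[p] + 1e-6) ** alpha * (heuristic[p] + 1e-6) for p in cand])
            k = rng.choice(len(cand), p=weights / weights.sum())
            ants[a] = cand[k]
            pheromone[cand[k]] += heuristic[cand[k]]   # deposit proportional to edge strength
        pheromone *= 1.0 - evaporation                  # evaporation
    return pheromone > pheromone.mean() + pheromone.std()   # crude threshold to a binary edge map

img = np.random.rand(32, 32)
print(aco_edges(img).sum(), "edge pixels")
```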

  19. Optimization-based wavefront sensorless adaptive optics for multiphoton microscopy.

    PubMed

    Antonello, Jacopo; van Werkhoven, Tim; Verhaegen, Michel; Truong, Hoa H; Keller, Christoph U; Gerritsen, Hans C

    2014-06-01

    Optical aberrations have detrimental effects in multiphoton microscopy. These effects can be curtailed by implementing model-based wavefront sensorless adaptive optics, which only requires the addition of a wavefront shaping device, such as a deformable mirror (DM), to an existing microscope. The aberration correction is achieved by maximizing a suitable image quality metric. We implement a model-based aberration correction algorithm in a second-harmonic microscope. The tip, tilt, and defocus aberrations are removed from the basis functions used for the control of the DM, as these aberrations induce distortions in the acquired images. We compute the parameters of a quadratic polynomial that is used to model the image quality metric directly from experimental input-output measurements. Finally, we apply the aberration correction by maximizing the image quality metric using the least-squares estimate of the unknown aberration. PMID:24977374
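
    A common way to realize such model-based sensorless correction is to treat the metric as locally quadratic in each modal coefficient, probe each mode with a few bias values, and jump to the vertex of the fitted parabola. The sketch below illustrates that scheme on a toy metric; it is not the authors' exact procedure, and the function names, probe amplitude, and toy metric are assumptions.

```python
import numpy as np

def quadratic_peak(biases, metrics):
    """Fit y = a*b^2 + c*b + d to (bias, metric) samples for one mode and
    return the bias that maximizes it (vertex of the parabola)."""
    A = np.column_stack([np.square(biases), biases, np.ones_like(biases)])
    a, c, _ = np.linalg.lstsq(A, metrics, rcond=None)[0]
    if a >= 0:           # not concave: no interior maximum, keep the current setting
        return 0.0
    return -c / (2 * a)

def sensorless_correction(measure_metric, n_modes=5, probe=0.5):
    """Estimate one correction amplitude per modal basis function of the DM.
    `measure_metric(coeffs)` returns the image-quality metric for a DM setting."""
    correction = np.zeros(n_modes)
    biases = np.array([-probe, 0.0, probe])
    for m in range(n_modes):
        metrics = []
        for b in biases:
            trial = correction.copy()
            trial[m] += b
            metrics.append(measure_metric(trial))
        correction[m] += quadratic_peak(biases, np.array(metrics))
    return correction

# Toy "microscope": the metric is a Gaussian of the residual aberration
true_aberration = np.array([0.3, -0.2, 0.1, 0.0, -0.4])
metric = lambda c: float(np.exp(-np.sum((c - true_aberration) ** 2)))
print(sensorless_correction(metric))   # approaches true_aberration
```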

  20. Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.

  1. Stress for Success: How to Optimize Your Performance.

    ERIC Educational Resources Information Center

    Gmelch, Walter H.

    1983-01-01

    This article explores linkages between stress and effective job performance: while too much stress can lead to burnout, too little stressful stimulation can result in boredom. Generating the proper amount of stress for optimal job performance is discussed. (PP)

  2. Issues in the design and optimization of adaptive optics and laser guide stars for the Keck Telescopes

    SciTech Connect

    Max, C.E.; Gavel, D.T.; Olivier, S.S.

    1994-03-01

    We discuss issues in optimizing the design of adaptive optics and laser guide star systems for the Keck Telescope. The initial tip-tilt system will use Keck's chopping secondary mirror. We describe design constraints, choice of detector, and expected performance of this tip-tilt system as well as its sky coverage. The adaptive optics system is being optimized for wavelengths of 1-2.2 μm. We are studying adaptive optics concepts which use a wavefront sensor with varying numbers of subapertures, so as to respond to changing turbulence conditions. The goal is to be able to "gang together" groups of deformable mirror subapertures under software control, when conditions call for larger subapertures. We present performance predictions as a function of sky coverage and the number of deformable mirror degrees of freedom. We analyze the predicted brightness of several candidate laser guide star systems, as a function of laser power and pulse format. These predictions are used to examine the resulting Strehl as a function of observing wavelength and laser type. We discuss laser waste heat and thermal management issues, and conclude with an overview of instruments under design to take advantage of the Keck adaptive optics system.

  3. Translation and adaptation of functional auditory performance indicators (FAPI)

    PubMed Central

    FERREIRA, Karina; MORET, Adriane Lima Mortari; BEVILACQUA, Maria Cecilia; JACOB, Regina de Souza Tangerino

    2011-01-01

    Work with deaf children has gained new attention since the expectation and goal of therapy have expanded to language development and subsequent language learning. Many clinical tests were developed for evaluation of speech sound perception in young children in response to the need for accurate assessment of hearing skills that developed from the use of individual hearing aids or cochlear implants. These tests also allow the evaluation of the rehabilitation program. However, few of these tests are available in Portuguese. Evaluation with the Functional Auditory Performance Indicators (FAPI) generates a child's functional auditory skills profile, which lists auditory skills in an integrated and hierarchical order. It has seven hierarchical categories, including sound awareness, meaningful sound, auditory feedback, sound source localizing, auditory discrimination, short-term auditory memory, and linguistic auditory processing. FAPI evaluation allows the therapist to map the child's hearing profile performance, determine the target for increasing the hearing abilities, and develop an effective therapeutic plan. Objective: Since the FAPI is an American test, the inventory was adapted for application in the Brazilian population. Material and Methods: The translation was done following the steps of translation and back translation, and reproducibility was evaluated. Four translated versions (two originals and two back-translated) were compared, and revisions were done to ensure language adaptation and grammatical and idiomatic equivalence. Results: The inventory was duly translated and adapted. Conclusion: Further studies about the application of the translated FAPI are necessary to make the test practicable in Brazilian clinical use. PMID:22230992

  4. A mathematical basis for the design and design optimization of adaptive trusses in precision control

    NASA Technical Reports Server (NTRS)

    Das, S. K.; Utku, S.; Chen, G.-S.; Wada, B. K.

    1991-01-01

    A mathematical basis for the optimal design of adaptive trusses to be used in supporting precision equipment is provided. The general theory of adaptive structures is introduced, and the global optimization problem of placing a limited number, q, of actuators, so as to maximally achieve precision control and provide prestress, is stated. Two serialized optimization problems, namely, optimal actuator placement for prestress and optimal actuator placement for precision control, are addressed. In the case of prestressing, the computation of a 'desired' prestress is discussed, the interaction between actuators and redundants in conveying the prestress is shown in its mathematical form, and a methodology for arriving at the optimal placement of actuators and additional redundants is discussed. With regard to precision control, an optimal placement scheme (for q actuators) for maximum 'authority' over the precision points is suggested. The results of the two serialized optimization problems are combined to give a suboptimal solution to the global optimization problem. A method for improving this suboptimal actuator placement scheme by iteration is presented.

  5. The cost of model reference adaptive control - Analysis, experiments, and optimization

    NASA Technical Reports Server (NTRS)

    Messer, R. S.; Haftka, R. T.; Cudney, H. H.

    1993-01-01

    In this paper the performance of Model Reference Adaptive Control (MRAC) is studied in numerical simulations and verified experimentally with the objective of understanding how differences between the plant and the reference model affect the control effort. MRAC is applied analytically and experimentally to a single degree of freedom system and analytically to a MIMO system with controlled differences between the model and the plant. It is shown that the control effort is sensitive to differences between the plant and the reference model. The effects of increased damping in the reference model are considered, and it is shown that requiring the controller to provide increased damping actually decreases the required control effort when differences between the plant and reference model exist. This result is useful because one of the first attempts to counteract the increased control effort due to differences between the plant and reference model might be to require less damping; however, this would actually increase the control effort. Optimization of weighting matrices is shown to help reduce the increase in required control effort. However, it was found that eventually the optimization resulted in a design that required an extremely high sampling rate for successful realization.

  6. Optimal task-dependent changes of bimanual feedback control and adaptation.

    PubMed

    Diedrichsen, Jörn

    2007-10-01

    The control and adaptation of bimanual movements are often considered to be a function of a fixed set of mechanisms [1, 2]. Here, I show that both feedback control and adaptation change optimally with task goals. Participants reached with two hands to two separate spatial targets (two-cursor condition) or used the same bimanual movements to move a cursor presented at the spatial average location of the two hands to a single target (one-cursor condition). A force field was randomly applied to one of the hands. In the two-cursor condition, online corrections occurred only on the perturbed hand, whereas the other movement was controlled independently. In the one-cursor condition, online correction could be detected on both hands as early as 190 ms after the start. These changes can be shown to be optimal with respect to a simple task-dependent cost function [3]. Adaptation, the influence of a perturbation on the next movement, also depended on task goals. In the two-cursor condition, only the perturbed hand adapted to a force perturbation [2], whereas in the one-cursor condition, both hands adapted. These findings demonstrate that the central nervous system changes bimanual feedback control and adaptation optimally according to the current task requirements. PMID:17900901

  7. Optimal adaptive two-stage designs for early phase II clinical trials.

    PubMed

    Shan, Guogen; Wilding, Gregory E; Hutson, Alan D; Gerstenberger, Shawn

    2016-04-15

    Simon's optimal two-stage design has been widely used in early phase clinical trials for Oncology and AIDS studies with binary endpoints. With this approach, the second-stage sample size is fixed when the trial passes the first stage with sufficient activity. Adaptive designs, such as those due to Banerjee and Tsiatis (2006) and Englert and Kieser (2013), are flexible in the sense that the second-stage sample size depends on the response from the first stage, and these designs are often seen to reduce the expected sample size under the null hypothesis as compared with Simon's approach. An unappealing trait of the existing designs is that their second-stage sample size is not a non-increasing function of the first-stage response rate. In this paper, an efficient intelligent process, the branch-and-bound algorithm, is used to search extensively for the optimal adaptive design with the smallest expected sample size under the null, while the type I and II error rates are maintained and the aforementioned monotonicity characteristic is respected. The proposed optimal design is observed to have smaller expected sample sizes compared to Simon's optimal design, and the maximum total sample size of the proposed adaptive design is very close to that from Simon's method. The proposed optimal adaptive two-stage design is recommended for use in practice to improve the flexibility and efficiency of early phase therapeutic development. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26526165
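
    For context, the operating characteristics that such searches trade off (probability of early termination and expected sample size under the null) follow directly from binomial probabilities. The sketch below evaluates them for an illustrative fixed two-stage design; it covers Simon-type rules only and does not reproduce the adaptive branch-and-bound search of the paper.

```python
from scipy.stats import binom

def simon_operating_chars(n1, r1, n, r, p):
    """Probability of rejecting H0, probability of early termination (PET), and
    expected sample size for a two-stage design (r1/n1, r/n) at true response rate p."""
    pet = binom.cdf(r1, n1, p)                       # stop after stage 1 if <= r1 responses
    reject = 0.0
    for x1 in range(r1 + 1, n1 + 1):                 # continue to stage 2
        reject += binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)   # total responses > r
    expected_n = n1 + (1 - pet) * (n - n1)
    return reject, pet, expected_n

# Illustrative design (r1/n1 = 0/9, r/n = 2/17), evaluated at p0 = 0.05 and p1 = 0.25
for p, label in [(0.05, "type I error"), (0.25, "power")]:
    rej, pet, en = simon_operating_chars(9, 0, 17, 2, p)
    print(f"p={p}: P(reject)={rej:.3f} ({label}), PET={pet:.2f}, E[N]={en:.1f}")
```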

  8. Fuzzy physical programming for Space Manoeuvre Vehicles trajectory optimization based on hp-adaptive pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chai, Runqi; Savvaris, Al; Tsourdos, Antonios

    2016-06-01

    In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions were calculated for each single-objective scenario. To obtain a compromise solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established by considering the fuzzy factors of the system, such that a proper compromise trajectory can be acquired. In addition, the NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in dealing with the multi-objective skip trajectory optimization for the SMV.

  9. On Time Delay Margin Estimation for Adaptive Control and Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2011-01-01

    This paper presents methods for estimating the time delay margin for adaptive control of input delay systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent an adaptive law by a locally bounded linear approximation within a small time window. The time delay margin of this input delay system represents a local stability measure and is computed analytically by three methods: Pade approximation, Lyapunov-Krasovskii method, and the matrix measure method. These methods are applied to the standard model-reference adaptive control, s-modification adaptive law, and optimal control modification adaptive law. The windowing analysis results in non-unique estimates of the time delay margin since it is dependent on the length of a time window and parameters which vary from one time window to the next. The optimal control modification adaptive law overcomes this limitation in that, as the adaptive gain tends to infinity and if the matched uncertainty is linear, then the closed-loop input delay system tends to an LTI system. A lower bound of the time delay margin of this system can then be estimated uniquely without the need for the windowing analysis. Simulation results demonstrate the feasibility of the bounded linear stability method for time delay margin estimation.

  10. Performance optimization of web-based medical simulation.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2013-01-01

    This paper presents a technique for performance optimization of multimodal interactive web-based medical simulation. A web-based simulation framework is promising for easy access and wide dissemination of medical simulation. However, the real-time performance of the simulation depends heavily on the hardware capability of the client side. Providing consistent simulation on different hardware is critical for reliable medical simulation. This paper proposes a non-linear mixed integer programming model to optimize the performance of visualization and physics computation while considering hardware capability and application specific constraints. The optimization model identifies and parameterizes the rendering and computing capabilities of the client hardware using an exploratory proxy code. The parameters are utilized to determine the optimized simulation conditions, including texture sizes, mesh sizes and canvas resolution. The test results show that the optimization model not only achieves the desired frames per second but also resolves visual artifacts due to low-performance hardware. PMID:23400151

  11. An adaptive metamodel-based global optimization algorithm for black-box type problems

    NASA Astrophysics Data System (ADS)

    Jie, Haoxiang; Wu, Yizhong; Ding, Jianwan

    2015-11-01

    In this article, an adaptive metamodel-based global optimization (AMGO) algorithm is presented to solve unconstrained black-box problems. In the AMGO algorithm, a type of hybrid model composed of kriging and augmented radial basis function (RBF) is used as the surrogate model. The weight factors of the hybrid model are adaptively selected in the optimization process. To balance the local and global search, a sub-optimization problem is constructed during each iteration to determine the new iterative points. As numerical experiments, six standard two-dimensional test functions are selected to show the distributions of iterative points. The AMGO algorithm is also tested on seven well-known benchmark optimization problems and contrasted with three representative metamodel-based optimization methods: efficient global optimization (EGO), Gutmann-RBF, and hybrid and adaptive metamodel (HAM). The test results demonstrate the efficiency and robustness of the proposed method. The AMGO algorithm is finally applied to the structural design of the import and export chamber of a cycloid gear pump, achieving satisfactory results.

  12. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of the computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it can improve the efficiency of the optimizers.

  13. LIFT: analysis of performance in a laser assisted adaptive optics

    NASA Astrophysics Data System (ADS)

    Plantet, Cedric; Meimon, Serge; Conan, Jean-Marc; Neichel, Benoît; Fusco, Thierry

    2014-08-01

    Laser assisted adaptive optics systems rely on Laser Guide Star (LGS) Wave-Front Sensors (WFS) for high order aberration measurements, and rely on Natural Guide Star (NGS) WFS to complement the measurements on low orders such as tip-tilt and focus. The sky coverage of the whole system is therefore related to the limiting magnitude of the NGS WFS. We have recently proposed LIFT, a novel phase retrieval WFS technique, that allows a 1 magnitude gain over the commonly used 2×2 Shack-Hartmann WFS. After an in-lab validation, LIFT's concept has been demonstrated on sky in open loop on GeMS (the Gemini Multiconjugate adaptive optics System at Gemini South). To complete its validation, LIFT now needs to be operated in closed loop in a laser assisted adaptive optics system. The present work gives a detailed analysis of LIFT's behavior in the presence of high order residuals and of how to limit aliasing effects on the tip/tilt/focus estimation. We also study the impact of the high orders on noise propagation. For this purpose, we simulate a multiconjugate adaptive optics loop representative of a GeMS-like 5 LGS configuration. The residual high orders are derived from a Fourier based simulation. We demonstrate that LIFT keeps a high performance gain over the 2×2 Shack-Hartmann whatever the turbulence conditions. Finally, we show the first simulation of a closed loop with LIFT estimating turbulent tip/tilt and focus residuals that could be induced by variations in the sodium layer's altitude.

  14. Leaf Area Adjustment As an Optimal Drought-Adaptation Strategy

    NASA Astrophysics Data System (ADS)

    Manzoni, S.; Beyer, F.; Thompson, S. E.; Vico, G.; Weih, M.

    2014-12-01

    Leaf phenology plays a major role in land-atmosphere mass and energy exchanges. Much work has focused on phenological responses to light and temperature, but less on leaf area changes during dry periods. Because the duration of droughts is expected to increase under future climates in seasonally-dry as well as mesic environments, it is crucial to (i) predict drought-related phenological changes and (ii) develop physiologically-sound models of leaf area dynamics during dry periods. Several optimization criteria have been proposed to model leaf area adjustment as soil moisture decreases. Some theories are based on the plant carbon (C) balance, hypothesizing that leaf area will decline when instantaneous net photosynthetic rates become negative (equivalent to maximization of cumulative C gain). Other theories draw on hydraulic principles, suggesting that leaf area should adjust either to maintain a constant leaf water potential (isohydric behavior) or to avoid leaf water potentials with negative impacts on photosynthesis (i.e., minimization of water stress). Evergreen leaf phenology is considered as a control case. Merging these theories into a unified framework, we quantify the effect of phenological strategy and climate forcing on the net C gain over the entire growing season. By accounting for the C costs of leaf flushing and the gains stemming from leaf photosynthesis, this metric assesses the effectiveness of different phenological strategies, under different climatic scenarios. Evergreen species are favored only when the dry period is relatively short, as they can exploit most of the growing season, and only incur leaf maintenance costs during the short dry period. In contrast, deciduous species that lower maintenance costs by losing leaves are advantaged under drier climates. Moreover, among drought-deciduous species, isohydric behavior leads to the lowest C gains. Losing leaves gradually so as to maintain a net C uptake equal to zero during the driest period in

  15. Program optimizations: The interplay between power, performance, and energy

    DOE PAGES

    Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; Dosanjh, Matthew

    2016-05-16

    Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.
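
    As a generic illustration of the loop-fusion idea mentioned above (not code from LULESH), the sketch below fuses two passes over the same arrays into one, eliminating the intermediate buffer and halving the number of traversals; the variable names and the toy arithmetic are assumptions.

```python
import random
import timeit

pressure = [random.random() for _ in range(100_000)]
volume = [random.random() + 0.5 for _ in range(100_000)]

def update_unfused(pressure, volume):
    """Two passes: the intermediate list `work` is materialized in memory and re-read."""
    work = [p * v for p, v in zip(pressure, volume)]            # loop 1
    return [w / (v + 1.0) for w, v in zip(work, volume)]        # loop 2

def update_fused(pressure, volume):
    """One pass: the intermediate stays in a local temporary, reducing memory traffic."""
    return [(p * v) / (v + 1.0) for p, v in zip(pressure, volume)]

assert update_unfused(pressure, volume) == update_fused(pressure, volume)
print("unfused:", timeit.timeit(lambda: update_unfused(pressure, volume), number=20))
print("fused:  ", timeit.timeit(lambda: update_fused(pressure, volume), number=20))
```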

  16. Design, Performance and Optimization for Multimodal Radar Operation

    PubMed Central

    Bhat, Surendra S.; Narayanan, Ram M.; Rangaswamy, Muralidhar

    2012-01-01

    This paper describes the underlying methodology behind an adaptive multimodal radar sensor that is capable of progressively optimizing its range resolution depending upon the target scattering features. It consists of a test-bed that enables the generation of linear frequency modulated waveforms of various bandwidths. This paper discusses a theoretical approach to optimizing the bandwidth used by the multimodal radar. It also discusses the various experimental results obtained from measurement. The resolution predicted from theory agrees quite well with that obtained from experiments for different target arrangements.
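
    The bandwidth-versus-resolution trade-off that such a multimodal radar exploits follows the standard pulse-compression relation (a textbook result, not a formula quoted from the paper):

    \[
    \Delta R = \frac{c}{2B},
    \]

    so that, for example, a 150 MHz chirp bandwidth corresponds to a range resolution of about 1 m, while a 15 MHz chirp resolves only about 10 m.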

  17. Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics

    NASA Astrophysics Data System (ADS)

    Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu

    2016-01-01

    An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.

  18. Transient analysis of an adaptive system for optimization of design parameters

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.

    1992-01-01

    Averaging methods are applied to analyzing and optimizing the transient response associated with the direct adaptive control of an oscillatory second-order minimum-phase system. The analytical design methods developed for a second-order plant can be applied with some approximation to a MIMO flexible structure having a single dominant mode.

  19. AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT

    EPA Science Inventory

    A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two benchmark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...

  20. Neural Network-Based Adaptive Optimal Controller - A Continuous-Time Formulation

    NASA Astrophysics Data System (ADS)

    Vrabie, Draguna; Lewis, Frank; Levine, Daniel

    We present a new online adaptive control scheme, for partially unknown nonlinear systems, which converges to the optimal state-feedback control solution for nonlinear systems that are affine in the input. The main features of the algorithm map onto the characteristics of the reward-based decision-making process in the mammalian brain.

  1. Blocking reduction of Landsat Thematic Mapper JPEG browse images using optimal PSNR estimated spectra adaptive postfiltering

    NASA Technical Reports Server (NTRS)

    Linares, Irving; Mersereau, Russell M.; Smith, Mark J. T.

    1994-01-01

    Two representative sample images of Band 4 of the Landsat Thematic Mapper are compressed with the JPEG algorithm at 8:1, 16:1 and 24:1 compression ratios for experimental browsing purposes. We then apply the Optimal PSNR Estimated Spectra Adaptive Postfiltering (ESAP) algorithm to reduce the DCT blocking distortion. ESAP reduces the blocking distortion while preserving most of the image's edge information by adaptively postfiltering the decoded image using the block's spectral information, which is already obtainable from each block's DCT coefficients. The algorithm iteratively applies a one-dimensional log-sigmoid weighting function to the separable interpolated local block estimated spectra of the decoded image until it converges to the optimal PSNR with respect to the original, using a 2-D steepest ascent search. Convergence is obtained in a few iterations for integer parameters. The optimal logsig parameters are transmitted to the decoder as a negligible byte of overhead data. A unique maximum is guaranteed due to the 2-D asymptotic exponential overshoot shape of the surface generated by the algorithm. ESAP is based on a DFT analysis of the DCT basis functions. It is implemented with pixel-by-pixel spatially adaptive separable FIR postfilters. PSNR objective improvements between 0.4 and 0.8 dB are shown, together with the corresponding optimal PSNR adaptively postfiltered images.

  2. Optimal control of gene expression for fast proteome adaptation to environmental change.

    PubMed

    Pavlov, Michael Y; Ehrenberg, Måns

    2013-12-17

    Bacterial populations growing in a changing world must adjust their proteome composition in response to alterations in the environment. Rapid proteome responses to growth medium changes are expected to increase the average growth rate and fitness value of these populations. Little is known about the dynamics of proteome change, e.g., whether bacteria use optimal strategies of gene expression for rapid proteome adjustments and if there are lower bounds to the time of proteome adaptation in response to growth medium changes. To begin answering these types of questions, we modeled growing bacteria as stoichiometrically coupled networks of metabolic pathways. These are balanced during steady-state growth in a constant environment but are initially unbalanced after rapid medium shifts due to a shortage of enzymes required at higher concentrations in the new environment. We identified an optimal strategy for rapid proteome adjustment in the absence of protein degradation and found a lower bound to the time of proteome adaptation after medium shifts. This minimal time is determined by the ratio between the Kullback-Leibler distance from the pre- to the postshift proteome and the postshift steady-state growth rate. The dynamics of optimally controlled proteome adaptation has a simple analytical solution. We used detailed numerical modeling to demonstrate that realistic bacterial control systems can emulate this optimal strategy for rapid proteome adaptation. Our results may provide a conceptual link between the physiology and population genetics of growing bacteria. PMID:24297927
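
    The lower bound stated in the abstract can be written compactly as below; the argument order of the divergence is an assumption, since the abstract only specifies the distance "from the pre- to the postshift proteome":

    \[
    T_{\min} = \frac{D_{\mathrm{KL}}\!\left(\mathbf{p}^{\mathrm{post}} \,\|\, \mathbf{p}^{\mathrm{pre}}\right)}{\mu_{\mathrm{post}}},
    \qquad
    D_{\mathrm{KL}}\!\left(\mathbf{p}^{\mathrm{post}} \,\|\, \mathbf{p}^{\mathrm{pre}}\right)
    = \sum_i p_i^{\mathrm{post}} \ln\frac{p_i^{\mathrm{post}}}{p_i^{\mathrm{pre}}},
    \]

    where the p_i are proteome fractions before and after the medium shift and mu_post is the postshift steady-state growth rate.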

  3. Organ sample generator for expected treatment dose construction and adaptive inverse planning optimization

    SciTech Connect

    Nie Xiaobo; Liang Jian; Yan Di

    2012-12-15

    Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the functional form of the coefficients. Eleven head and neck cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during head and neck cancer radiotherapy can be represented using the first 3 to 4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
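
    A minimal sketch of the PCA half of such a generator is given below: daily organ surface samples are decomposed into a mean shape plus a few leading eigenvectors, and random instances are drawn from a Gaussian over the leading coefficients. The time-varying least-squares regression used in the paper for nonstationary variation is omitted, and all names and dimensions are illustrative.

```python
import numpy as np

def build_osg(daily_shapes, n_modes=3):
    """daily_shapes: array (n_days, n_points * 3) of corresponding organ surface points
    observed on daily images. Returns the mean shape, the leading eigenvectors, and the
    per-mode standard deviations of the observed coefficients."""
    mean_shape = daily_shapes.mean(axis=0)
    centered = daily_shapes - mean_shape
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                         # leading eigenvectors (PCA)
    coeffs = centered @ modes.T                  # daily coefficients
    return mean_shape, modes, coeffs.std(axis=0)

def sample_organ(mean_shape, modes, sigma, rng):
    """Draw one random organ instance from the estimated variation PDF
    (coefficients modeled as independent zero-mean Gaussians)."""
    c = rng.normal(0.0, sigma)
    return mean_shape + c @ modes

rng = np.random.default_rng(1)
daily_shapes = rng.normal(size=(30, 300))        # 30 treatment days, 100 surface points (x, y, z)
mean_shape, modes, sigma = build_osg(daily_shapes)
print(sample_organ(mean_shape, modes, sigma, rng).shape)
```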

  4. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  5. Discrete-time entropy formulation of optimal and adaptive control problems

    NASA Technical Reports Server (NTRS)

    Tsai, Yweting A.; Casiello, Francisco A.; Loparo, Kenneth A.

    1992-01-01

    The discrete-time version of the entropy formulation of optimal control problems developed by G. N. Saridis (1988) is discussed. Given a dynamical system, the uncertainty in the selection of the control is characterized by the probability distribution (density) function which maximizes the total entropy. The equivalence between the optimal control problem and the optimal entropy problem is established, and the total entropy is decomposed into a term associated with the certainty equivalent control law, the entropy of estimation, and the so-called equivocation of the active transmission of information from the controller to the estimator. This provides a useful framework for studying the certainty equivalent and adaptive control laws.

  6. Business owners' optimism and business performance after a natural disaster.

    PubMed

    Bronson, James W; Faircloth, James B; Valentine, Sean R

    2006-12-01

    Previous work indicates that individuals' optimism is related to superior performance in adverse situations. This study examined correlations between business owners' optimism scores and measures of business recovery after flooding, but found only weak support (very small common variance) for a relationship between optimism and sales recovery. Using traditional measures of recovery, this study found little empirical evidence that optimism would be of value in identifying businesses at risk after a natural disaster. PMID:17305221

  7. Optimal modified tracking performance for MIMO systems under bandwidth constraint.

    PubMed

    Sun, Xin-Xiang; Wu, Jie; Zhan, Xi-Sheng; Han, Tao

    2016-05-01

    This paper investigates the optimal modified tracking performance of multi-input multi-output (MIMO) networked control systems (NCSs) with bandwidth and channel noise constraints. A new modified tracking performance index is proposed which prevents variations in the tracking error from leading to invalid data where there is no integrator in the plant. An expression for the optimal modified tracking performance is obtained using a method which includes co-prime factorization, partial factorization, spectral decomposition and H2 norm. The obtained results show that the optimal modified tracking performance is influenced by the non-minimum phase (NMP) zeros, unstable poles, and their directions. Furthermore, the characteristics of the input signal, the modification factor, the bandwidth and the channel noise are also shown to be closely related to the optimal modified tracking performance. Finally, the efficiency of the result is verified using some typical examples. PMID:26874745

  8. A self adaptive hybrid enhanced artificial bee colony algorithm for continuous optimization problems.

    PubMed

    Shan, Hai; Yasuda, Toshiyuki; Ohkura, Kazuhiro

    2015-06-01

    The artificial bee colony (ABC) algorithm is one of the most popular swarm intelligence algorithms, inspired by the foraging behavior of honeybee colonies. To improve the convergence ability and search speed of this approach in finding the best solution, and to control the balance between exploration and exploitation, we propose a self-adaptive hybrid enhanced ABC algorithm in this paper. To evaluate the performance of the standard ABC, best-so-far ABC (BsfABC), incremental ABC (IABC), and the proposed ABC algorithms, we implemented numerical optimization problems based on the IEEE Congress on Evolutionary Computation (CEC) 2014 test suite. Our experimental results show the comparative performance of the standard ABC, BsfABC, IABC, and the proposed ABC algorithms. According to the results, we conclude that the proposed ABC algorithm is competitive with state-of-the-art modified ABC algorithms such as the BsfABC and IABC algorithms on the benchmark problems defined by the CEC 2014 test suite with dimension sizes of 10, 30, and 50, respectively. PMID:25982071
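
    For reference, the baseline neighborhood search and onlooker selection that such ABC variants modify are the standard rules (generic to ABC, not specific to this paper):

    \[
    v_{ij} = x_{ij} + \phi_{ij}\left(x_{ij} - x_{kj}\right), \qquad \phi_{ij} \sim U(-1, 1), \; k \neq i,
    \qquad
    P_i = \frac{\mathrm{fit}_i}{\sum_{n=1}^{SN} \mathrm{fit}_n},
    \]

    where a candidate v_i greedily replaces x_i if its fitness improves and SN is the number of food sources.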

  9. Optimization of reactor network design problem using Jumping Gene Adaptation of Differential Evolution

    NASA Astrophysics Data System (ADS)

    Gujarathi, Ashish M.; Purohit, S.; Srikanth, B.

    2015-06-01

    The detailed working principle of the jumping gene adaptation of differential evolution (DE-JGa) is presented. The performance of the DE-JGa algorithm is compared with the performance of differential evolution (DE) and modified DE (MDE) by applying these algorithms to industrial problems. In this study, the reactor network design (RND) problem is solved using the DE, MDE, and DE-JGa algorithms. These industrial processes are highly nonlinear and complex with respect to their optimal operating conditions, and involve many equality and inequality constraints. Extensive computational comparisons have been made for all the chemical engineering problems considered. The results obtained in the present study show that the DE-JGa algorithm outperforms the other algorithms (DE and MDE). Several comparisons are made among the algorithms with regard to the number of function evaluations (NFE) and the CPU time required to find the global optimum. The standard deviation and variance values obtained using the DE-JGa, DE, and MDE algorithms also show that the DE-JGa algorithm gives a consistent set of results for the majority of the test problems and the industrial real-world problems.
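
    For readers unfamiliar with the baseline, a minimal DE/rand/1/bin loop is sketched below; the jumping-gene adaptation evaluated in the paper is not reproduced, and the benchmark function, population size, and control parameters are illustrative assumptions.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9, iters=300, seed=0):
    """Minimal DE/rand/1/bin: mutation v = a + F*(b - c), binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True              # take at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                        # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

# Usage on a simple benchmark (sphere function)
best_x, best_f = differential_evolution(lambda x: float(np.sum(x ** 2)), bounds=[(-5, 5)] * 5)
print(best_x, best_f)
```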

  10. An optimized adaptive optics experimental setup for in vivo retinal imaging

    NASA Astrophysics Data System (ADS)

    Balderas-Mata, S. E.; Valdivieso González, L. G.; Ramírez Zavaleta, G.; López Olazagasti, E.; Tepichin Rodriguez, E.

    2012-10-01

    The use of Adaptive Optics (AO) in ophthalmologic instruments to image human retinas has been shown to improve the lateral imaging resolution by correcting both the static and dynamic aberrations inherent in human eyes. Typically, the configuration of the AO arm uses an infrared beam from a superluminescent diode (SLD), which is focused on the retina and acts as a point source. The back-reflected light emerges through the eye's optical system, carrying with it the aberrations of the cornea. The aberrated wavefront is measured with a Shack-Hartmann wavefront sensor (SHWFS). However, aberrations in the optical imaging system can reduce the performance of the wavefront correction. The aim of this work is to present an optimized first-stage AO experimental setup for in vivo retinal imaging. In our proposal, the imaging optical system has been designed to reduce the spherical aberrations due to the lenses. The ANSI Standard is followed, ensuring safe power levels. The performance of the system will be compared with a commercial aberrometer. This system will be used as the AO arm of a flood-illuminated fundus camera system for retinal imaging. We present preliminary experimental results showing the enhancement.

  11. Hypnotizability and Performance on a Prism Adaptation Test.

    PubMed

    Menzocchi, Manuel; Mecacci, Giulio; Zeppi, Andrea; Carli, Giancarlo; Santarcangelo, Enrica L

    2015-12-01

    The susceptibility to hypnosis, which can be measured by scales, is not merely a cognitive trait. In fact, it is associated with a number of physiological correlates in the ordinary state of consciousness and in the absence of suggestions. The hypnotizability-related differences observed in sensorimotor integration suggested a major role of the cerebellum in the peculiar performance of healthy subjects with high scores of hypnotic susceptibility (highs). In order to provide behavioral evidence for this hypothesis, we submitted 20 highs and 21 low hypnotizable participants (lows) to the classical cerebellar Prism Adaptation Test (PAT). We found that the highs' performance was significantly less accurate and more variable than that of the lows, even though the two groups shared the same characteristics of adaptation to prismatic lenses. Although further studies are required to interpret these findings, they could account for earlier reports of hypnotizability-related differences in postural control and blink rate, as they indicate that hypnotizability influences the cerebellar control of sensorimotor integration. PMID:25913127

  12. Optimizing Hydronic System Performance in Residential Applications

    SciTech Connect

    2013-10-01

    Although new homes constructed with hydronic heat comprise only 3% of the market (US Census Bureau 2009), almost 14 million (11%) of the 115 million existing homes in the United States are heated with steam or hot water systems, according to 2009 US Census data. Therefore, improvements in hydronic system performance could result in significant energy savings in the US.

  13. Using perioperative analytics to optimize OR performance.

    PubMed

    Rempfer, Doug

    2015-06-01

    In the past, the data hospitals gleaned from operating rooms (ORs) tended to be static and lacking in actionable information. Hospitals can improve OR performance by applying OR analytics, such as evaluation of turnover times and expenses, which provide useful intelligence. Having the information is important, but success depends on aligning staff behavior to effectively achieve improvement strategies identified using the analytics. PMID:26665339

  14. System reliability, performance and trust in adaptable automation.

    PubMed

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-01-01

    The present study examined the effects of reduced system reliability on operator performance and automation management in an adaptable automation environment. 39 operators were randomly assigned to one of three experimental groups: low (60%), medium (80%), and high (100%) reliability of automation support. The support system provided five incremental levels of automation which operators could freely select according to their needs. After 3 h of training on a simulated process control task (AutoCAMS) in which the automation worked infallibly, operator performance and automation management were measured during a 2.5-h testing session. Trust and workload were also assessed through questionnaires. Results showed that although reduced system reliability resulted in lower levels of trust towards automation, there were no corresponding differences in the operators' reliance on automation. While operators showed overall a noteworthy ability to cope with automation failure, there were, however, decrements in diagnostic speed and prospective memory with lower reliability. PMID:26360226

  15. Adaptive optimal control of highly dissipative nonlinear spatially distributed processes with neuro-dynamic programming.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong

    2015-04-01

    Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of the general highly dissipative SDPs, and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs together with singular perturbation technique are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further converted to solve a Hamilton-Jacobi-Bellman (HJB) equation. HJB equation is a nonlinear PDE that has proven to be impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online using neural network (NN) for approximating the value function; and an online NN weight tuning law is proposed without requiring an initial stabilizing control policy. Moreover, by involving the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of high-speed aerospace vehicle, and the achieved results show its effectiveness. PMID:25794375
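
    For the usual input-affine reduced model dx/dt = f(x) + g(x)u with cost integral of Q(x) + u'Ru, the HJB equation that the critic network approximates takes the standard form shown below; the slow subsystem actually derived in the paper may differ in detail:

    \[
    0 = Q(x) + \nabla V^{*\top}(x)\, f(x) - \tfrac{1}{4}\, \nabla V^{*\top}(x)\, g(x) R^{-1} g^{\top}(x)\, \nabla V^{*}(x),
    \qquad
    u^{*}(x) = -\tfrac{1}{2} R^{-1} g^{\top}(x)\, \nabla V^{*}(x).
    \]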

  16. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-10-01

    To address the computationally intensive and technically complex problem of controlling nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed existing state-of-the-art algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.

  17. Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks

    SciTech Connect

    Sun, Z.; Sen, A.K.; Longman, R.W.

    2006-01-15

    An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-square method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.

  18. Optimal performance of reciprocating demagnetization quantum refrigerators

    NASA Astrophysics Data System (ADS)

    Kosloff, Ronnie; Feldmann, Tova

    2010-07-01

    A reciprocating quantum refrigerator is studied with the purpose of determining the limitations of cooling to absolute zero. The cycle is based on demagnetization and magnetization of a working medium. We find that if the energy spectrum of the working medium possesses an uncontrollable gap, and in addition there is noise on the controls, then there is a minimum achievable temperature above zero. The reason is that even a negligible amount of noise prevents adiabatic following during the demagnetization stage. This results in a minimum temperature, Tc(min)>0, which scales with the energy gap. The refrigerator is based on an Otto cycle where the working medium is an interacting spin system with an energy gap. For this system the external control Hamiltonian does not commute with the internal interaction. As a result, during the demagnetization and magnetization segments of the operating cycle the system cannot adiabatically follow the temporal change in the energy levels. We connect the nonadiabatic dynamics to quantum friction. An adiabatic measure is defined characterizing the rate of change of the Hamiltonian. Closed-form solutions are found for a constant adiabatic measure for all the cycle segments. We have identified a family of quantized frictionless cycles with increasing cycle times. These cycles minimize the entropy production. Such frictionless cycles are able to cool to Tc=0. External noise on the controls eliminates these frictionless cycles. The influence of phase and amplitude noise on the demagnetization and magnetization segments is explicitly derived. An extensive numerical study of optimal cooling cycles was carried out, which showed that at sufficiently low temperature the noise always dominates, restricting the minimum temperature.

  19. [Optimization of rate adaptation using Holter functions in DDD/R pacemakers].

    PubMed

    Novotný, T; Dvorák, R; Kozák, M; Vlasínová, J

    1998-06-01

    The introduction of pacing rate adaptation according to momentary metabolic needs added further programmable parameters that demand the physician's attention during initial post-implantation programming and in the follow-up of pacemaker patients. The parameter settings are strictly individual and require feedback control. In some devices this is enabled by Holter functions that are part of the pacemaker software. These functions were used to set the rate-adaptive parameters in a group of 23 patients with implanted DDD/R pacemakers. A walking stress test was used. Model follow-up situations are presented in 3 case reports. Using Holter functions enables the physician to relate the patient's subjective complaints to the actual heart rate, which is then used to optimize the parameters of rate adaptation. The authors consider Holter functions a necessary part of rate-adaptive pacemaker software. PMID:9820057

  20. Adaptive Postural Control for Joint Immobilization during Multitask Performance

    PubMed Central

    Hsu, Wei-Li

    2014-01-01

    Motor abundance is an essential feature of adaptive control. The range of joint combinations enabled by motor abundance provides the body with the necessary freedom to adopt different positions, configurations, and movements that allow for exploratory postural behavior. This study investigated the adaptation of postural control to joint immobilization during multi-task performance. Twelve healthy volunteers (6 males and 6 females; 21–29 yr) without any known neurological deficits, musculoskeletal conditions, or balance disorders participated in this study. The participants executed a targeting task, alone or combined with a ball-balancing task, while standing with free or restricted joint motions. The effects of joint configuration variability on center of mass (COM) stability were examined using uncontrolled manifold (UCM) analysis. The UCM method separates joint variability into two components: the first is consistent with the use of motor abundance, which does not affect COM position (VUCM); the second leads to COM position variability (VORT). The analysis showed that joints were coordinated such that their variability had a minimal effect on COM position. However, the component of joint variability that reflects the use of motor abundance to stabilize COM (VUCM) was significantly decreased when the participants performed the combined task with immobilized joints. The component of joint variability that leads to COM variability (VORT) tended to increase with a reduction in joint degrees of freedom. The results suggested that joint immobilization increases the difficulty of stabilizing COM when multiple tasks are performed simultaneously. These findings are important for developing rehabilitation approaches for patients with limited joint movements. PMID:25329477
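
    Assuming joint configurations and a linearized joint-to-COM Jacobian are available, the UCM split of joint variance into VUCM and VORT can be sketched as follows (an illustrative outline, not the study's analysis code):

        import numpy as np

        def null_space_basis(J, tol=1e-10):
            """Orthonormal basis of the null space of J (the UCM directions) via SVD."""
            _, s, vt = np.linalg.svd(J)
            rank = int(np.sum(s > tol))
            return vt[rank:].T

        def ucm_decomposition(joint_configs, jacobian):
            """Split joint-angle variance into VUCM (COM-irrelevant) and VORT parts.

            joint_configs : (n_trials, n_joints) joint angles across repetitions.
            jacobian      : (n_task, n_joints) linearized joints-to-COM mapping.
            Returns variance per degree of freedom within and orthogonal to the UCM.
            """
            dev = joint_configs - joint_configs.mean(axis=0)   # deviations from the mean configuration
            basis = null_space_basis(jacobian)                 # directions that leave COM unchanged
            proj_ucm = dev @ basis                             # components within the UCM
            v_ucm = np.sum(proj_ucm ** 2) / (basis.shape[1] * len(dev))
            resid = dev - proj_ucm @ basis.T                   # remaining, COM-relevant components
            v_ort = np.sum(resid ** 2) / (jacobian.shape[0] * len(dev))
            return v_ucm, v_ort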

  1. The optimally performing Fischer-Tropsch catalyst.

    PubMed

    Filot, Ivo A W; van Santen, Rutger A; Hensen, Emiel J M

    2014-11-17

    Microkinetics simulations are presented based on DFT-determined elementary reaction steps of the Fischer-Tropsch (FT) reaction. The formation of long-chain hydrocarbons occurs on stepped Ru surfaces with CH as the inserting monomer, whereas planar Ru only produces methane because of slow CO activation. By varying the metal-carbon and metal-oxygen interaction energy, three reactivity regimes are identified with rates being controlled by CO dissociation, chain-growth termination, or water removal. Predicted surface coverages are dominated by CO, C, or O, respectively. Optimum FT performance occurs at the interphase of the regimes of limited CO dissociation and chain-growth termination. Current FT catalysts are suboptimal, as they are limited by CO activation and/or O removal. PMID:25168456

  2. Performance assessment of MEMS adaptive optics in tactical airborne systems

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1999-09-01

    Tactical airborne electro-optical systems are severely constrained by weight, volume, power, and cost. Micro-electro-mechanical adaptive optics provide a solution that addresses these engineering realities without compromising spatial and temporal compensation requirements. Through modeling and analysis, we determined that substantial benefits could be gained for laser designators, ladar, countermeasures, and missile seekers. The potential exists for improving seeker imagery resolution by 20 percent, extending countermeasures keep-out range by a factor of 5, doubling the range for ladar detection and identification, and compensating for supersonic and hypersonic aircraft boundary layers. Innovative concepts are required for atmospheric path and boundary layer compensation. We have developed designs that perform these tasks using high-speed scene-based wavefront sensing, IR aerosol laser guide stars, and extended-object wavefront beacons. We have developed a number of adaptive optics system configurations that meet the spatial resolution requirements, and we have determined that the sensing and signal processing requirements can be met. With the help of micromachined deformable mirrors and sensors, we will be able to integrate the systems into existing airborne pods and missiles as well as next-generation electro-optical systems.

  3. The Differentiation of Adaptive Behaviours: Evidence from High and Low Performers

    ERIC Educational Resources Information Center

    Kane, Harrison; Oakland, Thomas David

    2015-01-01

    Background: Professionals who use measures of adaptive behaviour when working with special populations may assume that adaptive behaviour is a consistent and linear construct at various ability levels and thus believe the construct of adaptive behaviour is the same for high and low performers. That is, highly adaptive people simply are assumed to…

  4. Adaptive tracking and compensation of laser spot based on ant colony optimization

    NASA Astrophysics Data System (ADS)

    Yang, Lihong; Ke, Xizheng; Bai, Runbing; Hu, Qidi

    2009-05-01

    Because of atmospheric absorption, scattering, and turbulence, laser spot twinkling, beam drift, and spot break-up occur when a laser signal propagates through the atmospheric channel. These phenomena seriously affect the stability and reliability of the laser spot receiving system. To reduce the influence of atmospheric turbulence, we adopt optimal-control ideas from the field of artificial intelligence and propose a novel adaptive optical control technique, model-free optimized adaptive control. We analyze low-order-mode wavefront error theory, employ an adaptive optical system to correct the errors, and design its adaptive system structure. The ant colony algorithm, characterized by positive feedback, distributed computing, and greedy heuristic search, serves as the core control algorithm. Ant colony optimization of the adaptive optical phase compensation is simulated. The simulation results show that the algorithm can effectively control the laser energy distribution, improve laser beam quality, and enhance the signal-to-noise ratio of the received signal.

  5. Ring cusp discharge chamber performance optimization

    NASA Technical Reports Server (NTRS)

    Hiatt, J. M.; Wilbur, P. J.

    1985-01-01

    An experimental study of the effects of discharge chamber length and the locations of the anode, cathode and ring cusp within the chamber on the performance of an 8 cm dia. ring cusp thruster is described. As these lengths and positions are varied the changes induced in plasma ion energy costs, extracted ion fractions and ion beam profiles are measured. Results show that the anode may be positioned at any location along an 'optimum virtual anode' magnetic field line and minimum plasma ion energy costs will result. The actual location of this field line is related to a 'virtual cathode' magnetic field line that is defined by the cathode position. The magnetic field has to be such that the virtual anode field line intersects the grids at the outermost ring of grid holes to maximize the extracted ion fraction and flatten the ion beam profile. Discharge chamber lengths that were as small as possible in the test apparatus yielded the lowest extracted ion fractions.

  6. Performance optimization in electric field gradient focusing.

    PubMed

    Sun, Xuefei; Farnsworth, Paul B; Tolley, H Dennis; Warnick, Karl F; Woolley, Adam T; Lee, Milton L

    2009-01-01

    Electric field gradient focusing (EFGF) is a technique used to simultaneously separate and concentrate biomacromolecules, such as proteins, based on the opposing forces of an electric field gradient and a hydrodynamic flow. Recently, we reported EFGF devices fabricated completely from copolymers functionalized with poly(ethylene glycol), which display excellent resistance to protein adsorption. However, the previous devices did not provide the predicted linear electric field gradient and stable current. To improve performance, Tris-HCl buffer that was previously doped in the hydrogel was replaced with a phosphate buffer containing a salt (i.e., potassium chloride, KCl) with high mobility ions. The new devices exhibited stable current, good reproducibility, and a linear electric field distribution in agreement with the shaped gradient region design due to improved ion transport in the hydrogel. The field gradient was calculated based on theory to be approximately 5.76 V/cm² for R-phycoerythrin when the applied voltage was 500 V. The effect of EFGF separation channel dimensions was also investigated; a narrower focused band was achieved in a smaller diameter channel. The relationship between the bandwidth and channel diameter is consistent with theory. Three model proteins were resolved in an EFGF channel of this design. The improved device demonstrated 14,000-fold concentration of a protein sample (from 2 ng/mL to 27 µg/mL). PMID:19081099

  7. Use Alkalinity Monitoring to Optimize Bioreactor Performance.

    PubMed

    Jones, Christopher S; Kult, Keegan J

    2016-05-01

    In recent years, the agricultural community has reduced flow of nitrogen from farmed landscapes to stream networks through the use of woodchip denitrification bioreactors. Although deployment of this practice is becoming more common to treat high-nitrate water from agricultural drainage pipes, information about bioreactor management strategies is sparse. This study focuses on the use of water monitoring, and especially the use of alkalinity monitoring, in five Iowa woodchip bioreactors to provide insights into and to help manage bioreactor chemistry in ways that will produce desirable outcomes. Results reported here for the five bioreactors show average annual nitrate load reductions between 50 and 80%, which is acceptable according to established practice standards. Alkalinity data, however, imply that nitrous oxide formation may have regularly occurred in at least three of the bioreactors that are considered to be closed systems. Nitrous oxide measurements of influent and effluent water provide evidence that alkalinity may be an important indicator of bioreactor performance. Bioreactor chemistry can be managed by manipulation of water throughput in ways that produce adequate nitrate removal while preventing undesirable side effects. We conclude that (i) water should be retained for longer periods of time in bioreactors where nitrous oxide formation is indicated, (ii) measuring only nitrate and sulfate concentrations is insufficient for proper bioreactor operation, and (iii) alkalinity monitoring should be implemented into protocols for bioreactor management. PMID:27136151

  8. Performance of the Keck Observatory adaptive optics system

    SciTech Connect

    van Dam, M A; Mignant, D L; Macintosh, B A

    2004-01-19

    In this paper, the adaptive optics (AO) system at the W.M. Keck Observatory is characterized. The authors calculate the error budget of the Keck AO system operating in natural guide star mode with a near infrared imaging camera. By modeling the control loops and recording residual centroids, the measurement noise and bandwidth errors are obtained. The error budget is consistent with the images obtained. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of up to 0.37 at 1.58 µm using a bright guide star and 0.19 for a magnitude 12 star.
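
    Error budgets of this kind are commonly rolled up into a predicted Strehl ratio with the extended Maréchal approximation S ≈ exp[-(2πσ/λ)²]; the sketch below uses made-up error terms, not the Keck values.

        import numpy as np

        def strehl_from_budget(error_terms_nm, wavelength_nm=1580.0):
            """Combine independent rms wavefront-error terms (nm) in quadrature and
            apply the extended Marechal approximation S = exp(-(2*pi*sigma/lambda)**2)."""
            sigma = np.sqrt(sum(e ** 2 for e in error_terms_nm))   # total rms wavefront error
            return np.exp(-(2 * np.pi * sigma / wavelength_nm) ** 2)

        # Hypothetical budget: fitting, bandwidth, and measurement-noise errors (nm rms)
        print(strehl_from_budget([170.0, 130.0, 140.0]))  # about 0.36 for this made-up budget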

  9. Development of Refrigeration Hermetic Compressors Adapt to Starting Performance

    NASA Astrophysics Data System (ADS)

    Matsushima, Masatoshi; Nomura, Tomohiro; Murata, Mitsuru

    Motors, which account for most of a refrigerating hermetic compressor, must be made smaller, lighter, more efficient, and less costly. To achieve these objectives, we need to investigate compressor torque at starting and develop new motors whose torque is adapted to it. In this report, we study high-temperature reciprocating compressors that begin to rotate with the pressures balanced and whose torque fluctuates sharply within one rotation. We measure the pressure fluctuation inside the cylinder and the rotational speed of the motor from the start of rotation up to full speed, and from these data calculate the compressor torque, that is, the torque required of the motor. As a result, we were able to use capacitor-run motors that need no starting capacitor or voltage relay. Ultimately we developed compressors with better starting performance, higher efficiency, smaller size, lighter weight, and lower cost.

  10. Control Systems with Normalized and Covariance Adaptation by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T. (Inventor); Burken, John J. (Inventor); Hanson, Curtis E. (Inventor)

    2016-01-01

    Disclosed is a novel adaptive control method and system called optimal control modification with normalization and covariance adjustment. The invention specifically addresses current challenges with adaptive control in these areas: 1) persistent excitation, 2) complex nonlinear input-output mapping, 3) large inputs and persistent learning, and 4) the lack of stability analysis tools for certification. The invention has been subjected to extensive simulation and flight testing. The results substantiate the effectiveness of the invention and demonstrate its technical feasibility for use in modern aircraft flight control systems.

  11. Adaptive controller for dynamic power and performance management in the virtualized computing systems.

    PubMed

    Wen, Chengjian; Long, Xiang; Mu, Yifen

    2013-01-01

    Power and performance management in large-scale computing systems such as data centers has attracted considerable interest from both enterprises and academic researchers, as power saving has become increasingly important in many fields. Because of the multiple objectives, multiple influential factors, and hierarchical structure of such systems, the problem is complex and hard. In this paper, the problem is investigated in a virtualized computing system. Specifically, it is formulated as a power optimization problem with constraints on performance. An adaptive controller based on a least-squares self-tuning regulator (LS-STR) is designed to track performance in the first step; in the second step, the resources computed by the controller are allocated so as to minimize power consumption. Simulations were designed to test the effectiveness of this method and to compare it with several other controllers. The simulation results show that the adaptive controller is generally effective: it is applicable to different performance metrics, different workloads, and single and multiple workloads; it tracks the performance requirement effectively and reduces power consumption significantly. PMID:23451241

  12. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    NASA Astrophysics Data System (ADS)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

    In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. CVaR (Conditional Value at Risk) and

  13. Seed vigour and crop establishment: extending performance beyond adaptation.

    PubMed

    Finch-Savage, W E; Bassel, G W

    2016-02-01

    Seeds are central to crop production, human nutrition, and food security. A key component of the performance of crop seeds is the complex trait of seed vigour. Crop yield and resource use efficiency depend on successful plant establishment in the field, and it is the vigour of seeds that defines their ability to germinate and establish seedlings rapidly, uniformly, and robustly across diverse environmental conditions. Improving vigour to enhance the critical and yield-defining stage of crop establishment remains a primary objective of the agricultural industry and the seed/breeding companies that support it. Our knowledge of the regulation of seed germination has developed greatly in recent times, yet understanding of the basis of variation in vigour and therefore seed performance during the establishment of crops remains limited. Here we consider seed vigour at an ecophysiological, molecular, and biomechanical level. We discuss how some seed characteristics that serve as adaptive responses to the natural environment are not suitable for agriculture. Past domestication has provided incremental improvements, but further actively directed change is required to produce seeds with the characteristics required both now and in the future. We discuss ways in which basic plant science could be applied to enhance seed performance in crop production. PMID:26585226

  14. Performance optimization of scientific applications on emerging architectures

    NASA Astrophysics Data System (ADS)

    Dursun, Hikmet

    The shift to many-core architecture design paradigm in computer market has provided unprecedented computational capabilities. This also marks the end of the free-ride era---scientific software must now evolve with new chips. Hence, it is of great importance to develop large legacy-code optimization frameworks to achieve an optimal system architecture-algorithm mapping that maximizes processor utilization and thereby achieves higher application performance. To address this challenge, this thesis studies and develops scalable algorithms for leveraging many-core resources optimally to improve the performance of massively parallel scientific applications. This work presents a systematic approach to optimize scientific codes on emerging architectures, which consists of three major steps: (1) Develop a performance profiling framework to identify application performance bottlenecks on clusters of emerging architectures; (2) explore common algorithmic kernels in a suite of real world scientific applications and develop performance tuning strategies to provide insight into how to maximally utilize underlying hardware; and (3) unify experience in performance optimization to develop a top-down optimization framework for the optimization of scientific applications on emerging high-performance computing platforms. This thesis makes the following contributions. First, we have designed and implemented a performance analysis methodology for Cell-accelerated clusters. Two parallel scientific applications---lattice Boltzmann (LB) flow simulation and atomistic molecular dynamics (MD) simulation---are analyzed and valuable performance insights are gained on a Cell processor based PlayStation3 cluster as well as a hybrid Opteron+Cell based cluster similar to the design of Roadrunner---the first petaflop supercomputer of the world. Second, we have developed a novel parallelization framework for finite-difference time-domain applications. The approach is validated in a seismic

  15. Space Partitioning Evolutionary Many-Objective Optimization: Performance Analysis on MNK-Landscapes

    NASA Astrophysics Data System (ADS)

    Aguirre, Hernán; Tanaka, Kiyoshi

    This work proposes space partitioning, a new approach to evolutionary many-objective optimization. The proposed approach instantaneously partitions the objective space into subspaces and concurrently searches in each subspace. A partition strategy is used to define a schedule of subspace sampling, so that different subspaces can be emphasized at different generations. Space partitioning is implemented with adaptive epsilon-ranking, a procedure that re-ranks solutions in each subspace giving selective advantage to a subset of well distributed solutions chosen from the set of solutions initially assigned rank-1 in the high dimensional objective space. Adaptation works to keep the actual number of rank-1 solutions in each subspace close to a desired number. The effects on performance of space partitioning are verified on MNK-Landscapes. Also, a comparison with two substitute distance assignment methods recently proposed for many-objective optimization is included.

  16. Application of an optimization method to high performance propeller designs

    NASA Technical Reports Server (NTRS)

    Li, K. C.; Stefko, G. L.

    1984-01-01

    The application of an optimization method to determine the propeller blade twist distribution which maximizes propeller efficiency is presented. The optimization employs a previously developed method which has been improved to include the effects of blade drag, camber and thickness. Before the optimization portion of the computer code is used, comparisons of calculated propeller efficiencies and power coefficients are made with experimental data for one NACA propeller at Mach numbers in the range of 0.24 to 0.50 and another NACA propeller at a Mach number of 0.71 to validate the propeller aerodynamic analysis portion of the computer code. Then comparisons of calculated propeller efficiencies for the optimized and the original propellers show the benefits of the optimization method in improving propeller performance. This method can be applied to the aerodynamic design of propellers having straight, swept, or nonplanar propeller blades.

  17. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
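
    The key property of the discrete adjoint, one extra linear solve giving sensitivities with respect to all design variables at once, can be sketched for a linear residual R(u, d) = A u − b(d) = 0 (a toy setting, not the Navier-Stokes implementation):

        import numpy as np

        def adjoint_gradient(A, dRdd, dJdu):
            """Gradient of J(u(d)) subject to the discrete residual A u - b(d) = 0.

            A single adjoint solve (A^T lam = dJ/du) yields the sensitivity with respect
            to every design variable at once: dJ/dd = -lam^T dR/dd.
            A    : (n, n) state Jacobian dR/du
            dRdd : (n, m) residual sensitivity to the m design variables
            dJdu : (n,)  objective sensitivity to the state
            """
            lam = np.linalg.solve(A.T, dJdu)   # one adjoint solve, independent of m
            return -dRdd.T @ lam               # gradient with respect to all m design variables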

  18. Integrated optimal allocation model for complex adaptive system of water resources management (II): Case study

    NASA Astrophysics Data System (ADS)

    Zhou, Yanlai; Guo, Shenglian; Xu, Chong-Yu; Liu, Dedi; Chen, Lu; Wang, Dong

    2015-12-01

    Climate change, rapid economic development, and growth of the human population are considered the major triggers of increasing challenges for water resources management. The proposed integrated optimal allocation model (IOAM) for a complex adaptive system of water resources management is applied to the Dongjiang River basin, located in Guangdong Province, China. The IOAM is calibrated and validated for the baseline period (2010) and a future period (2011-2030), respectively. The simulation results indicate that the proposed model can make a trade-off between demand and supply for the sustainable development of society, economy, ecology, and environment, and achieve adaptive management of water resources allocation. The optimal scheme derived by multi-objective evaluation is recommended to decision-makers in order to maximize the comprehensive benefits of water resources management.

  19. Crop Classification by Forward Neural Network with Adaptive Chaotic Particle Swarm Optimization

    PubMed Central

    Zhang, Yudong; Wu, Lenan

    2011-01-01

    This paper proposes a hybrid crop classifier for polarimetric synthetic aperture radar (SAR) images. The feature set consisted of the span image, the H/A/α decomposition, and gray-level co-occurrence matrix (GLCM) based texture features. The features were then reduced by principal component analysis (PCA). Finally, a two-hidden-layer forward neural network (NN) was constructed and trained by adaptive chaotic particle swarm optimization (ACPSO). K-fold cross validation was employed to enhance generalization. The experimental results on Flevoland sites demonstrate the superiority of ACPSO to back-propagation (BP), adaptive BP (ABP), momentum BP (MBP), Particle Swarm Optimization (PSO), and Resilient back-propagation (RPROP) methods. Moreover, the computation time for each pixel is only 1.08 × 10−7 s. PMID:22163872
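
    A typical way to add chaos to a PSO weight-training loop is to drive the inertia weight with a logistic map; the update below is an assumed illustration of that idea, not the paper's exact ACPSO rule.

        import numpy as np

        def acpso_step(x, v, pbest, gbest, z, c1=2.0, c2=2.0):
            """One adaptive chaotic PSO update (illustrative only).

            A logistic map z <- 4 z (1 - z) perturbs the inertia weight each iteration,
            which helps the swarm escape premature convergence while training NN weights.
            """
            z = 4.0 * z * (1.0 - z)                        # chaotic sequence in (0, 1)
            w = 0.4 + 0.5 * z                              # chaotically varying inertia weight
            r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            return x, v, z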

  20. Time reversal versus adaptive optimization for spatiotemporal nanolocalization in a random nanoantenna

    NASA Astrophysics Data System (ADS)

    Differt, Dominik; Hensen, Matthias; Pfeiffer, Walter

    2016-05-01

    Spatiotemporal nanolocalization of ultrashort pulses in a random scattering nanostructure via time reversal and adaptive optimization, employing a genetic algorithm and a suitably defined fitness function, is studied for two embedded nanoparticles that are separated by only a tenth of the free-space wavelength. The nanostructure is composed of resonant core-shell nanoparticles (TiO2 core and Ag shell) placed randomly around these two target nanoparticles. The time reversal scheme achieves selective nanolocalization only by chance, if the incident radiation can couple efficiently to dipolar local modes interacting with the target/emitter particle. Even embedding the structure in a reverberation chamber fails to improve the nanolocalization. In contrast, the adaptive optimization strategy reliably yields nanolocalization of the radiation and allows a highly selective excitation of either target position. This demonstrates that random scattering structures are interesting multi-purpose optical nanoantennas for realizing highly flexible spatiotemporal optical near-field control.

  1. Performance optimization of an MHD generator with physical constraints

    NASA Technical Reports Server (NTRS)

    Pian, C. C. P.; Seikel, G. R.; Smith, J. M.

    1979-01-01

    A technique is described that optimizes the power output of a Faraday MHD generator operating under a prescribed set of electrical and magnetic constraints. The method does not rely on complicated numerical optimization techniques. Instead, the magnetic field and the electrical loading are adjusted at each streamwise location such that the resultant generator design operates at the most limiting of the cited stress levels. The simplicity of the procedure makes it ideal for optimizing generator designs for system analysis studies of power plants. The resultant locally optimum channel designs are, however, not necessarily the global optimum designs. The results of generator performance calculations are presented for an approximately 2000 MWe size plant. The difference between the maximum power generator design and the optimal design which maximizes net MHD power is described. The sensitivity of the generator performance to the various operational parameters is also presented.

  2. Routing performance analysis and optimization within a massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  3. Optimization and performance of Space Station Freedom solar cells

    NASA Technical Reports Server (NTRS)

    Khemthong, S.; Hansen, N.; Bower, M.

    1991-01-01

    High efficiency, large area, and low cost solar cells are the drivers for Space Station solar array designs. The manufacturing throughput, process complexity, yield of the cells, and array manufacturing technique determine the economics of the solar array design. The cell efficiency optimization of large area (8 x 8 cm), dielectric wrap-through contact solar cells is described. The results of the optimization and the solar cell performance of limited production runs are reported.

  4. Adaptive Control for Linear Uncertain Systems with Unmodeled Dynamics Revisited via Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2013-01-01

    This paper presents the optimal control modification for linear uncertain plants. The Lyapunov analysis shows that the modification parameter has a limiting value depending on the nature of the uncertainty. The optimal control modification exhibits a linear asymptotic property that enables it to be analyzed in a linear time invariant framework for linear uncertain plants. The linear asymptotic property shows that the closed-loop plants in the limit possess a scaled input-output mapping. Using this property, we can derive an analytical closed-loop transfer function in the limit as the adaptive gain tends to infinity. The paper revisits the Rohrs counterexample problem that illustrates the nature of non-robustness of model-reference adaptive control in the presence of unmodeled dynamics. An analytical approach is developed to compute exactly the modification parameter for the optimal control modification that stabilizes the plant in the Rohrs counterexample. The linear asymptotic property is also used to address output feedback adaptive control for non-minimum phase plants with a relative degree 1.

  5. Performance optimization of an MHD generator with physical constraints

    NASA Technical Reports Server (NTRS)

    Pian, C. C. P.; Seikel, G. R.; Smith, J. M.

    1979-01-01

    A method to optimize the Faraday MHD generator performance under a prescribed set of electrical and magnetic constraints is described. The results of generator performance calculations using this technique are presented for a very large MHD/steam plant. The differences between the maximum power and maximum net power generators are described. The sensitivity of the generator performance to the various operational parameters is presented.

  6. Proficient brain for optimal performance: the MAP model perspective

    PubMed Central

    di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques. PMID:27257557

  7. An adaptive controller for enhancing operator performance during teleoperation

    NASA Technical Reports Server (NTRS)

    Carignan, Craig R.; Tarrant, Janice M.; Mosier, Gary E.

    1989-01-01

    An adaptive controller is developed for adjusting robot arm parameters while manipulating payloads of unknown mass and inertia. The controller is tested experimentally in a master/slave configuration where the adaptive slave arm is commanded via human operator inputs from a master. Kinematically similar six-joint master and slave arms are used with the last three joints locked for simplification. After a brief initial adaptation period for the unloaded arm, the slave arm retrieves different size payloads and maneuvers them about the workspace. Comparisons are then drawn with similar tasks where the adaptation is turned off. Several simplifications of the controller dynamics are also addressed and experimentally verified.

  8. Application of multi-objective controller to optimal tuning of PID gains for a hydraulic turbine regulating system using adaptive grid particle swarm optimization.

    PubMed

    Chen, Zhihuan; Yuan, Yanbin; Yuan, Xiaohui; Huang, Yuehua; Li, Xianshan; Li, Wenwu

    2015-05-01

    A hydraulic turbine regulating system (HTRS) is one of the most important components of a hydropower plant and plays a key role in maintaining the safety, stability, and economical operation of hydro-electric installations. At present, the conventional PID controller is widely applied in HTRS systems for its practicability and robustness, and the primary problem with this control law is how to optimally tune the parameters, i.e., to determine the PID controller gains for satisfactory performance. In this paper, a multi-objective evolutionary algorithm named adaptive grid particle swarm optimization (AGPSO) is applied to the PID gain tuning problem of the HTRS system. This AGPSO-based method, unlike traditional single-objective optimization methods, is designed to handle settling time and overshoot level simultaneously, generating a set of non-inferior alternative solutions (i.e., a Pareto set). Furthermore, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto set. An illustrative example of parameter tuning for the nonlinear HTRS system is introduced to verify the feasibility and effectiveness of the proposed AGPSO-based approach, as compared with two other prominent multi-objective algorithms, Non-dominated Sorting Genetic Algorithm II (NSGAII) and Strength Pareto Evolutionary Algorithm II (SPEAII), in terms of the quality and diversity of the obtained Pareto solution sets. Simulation results show that the AGPSO-based approach outperforms the compared methods, with higher efficiency and better solution quality, whether the HTRS system operates under unloaded or loaded conditions. PMID:25481821
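
    The fuzzy membership rule for picking a best-compromise point from a Pareto set is standard and easy to sketch; the function below assumes both objectives (e.g., settling time and overshoot) are being minimized, and is not taken from the paper's implementation.

        import numpy as np

        def best_compromise(pareto_objs):
            """Pick a compromise solution from a Pareto set via fuzzy membership.

            pareto_objs : (n_solutions, n_objectives) array of minimized objectives.
            Each objective gets a linear membership in [0, 1]; the solution with
            the largest normalized membership sum is returned.
            """
            f_min = pareto_objs.min(axis=0)
            f_max = pareto_objs.max(axis=0)
            mu = (f_max - pareto_objs) / np.where(f_max > f_min, f_max - f_min, 1.0)
            score = mu.sum(axis=1) / mu.sum()
            return int(np.argmax(score))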

  9. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small

  10. Accelerated optimization and automated discovery with covariance matrix adaptation for experimental quantum control

    NASA Astrophysics Data System (ADS)

    Roslund, Jonathan; Shir, Ofer M.; Bäck, Thomas; Rabitz, Herschel

    2009-10-01

    Optimization of quantum systems by closed-loop adaptive pulse shaping offers a rich domain for the development and application of specialized evolutionary algorithms. Derandomized evolution strategies (DESs) are presented here as a robust class of optimizers for experimental quantum control. The combination of stochastic and quasi-local search embodied by these algorithms is especially amenable to the inherent topology of quantum control landscapes. Implementation of DES in the laboratory results in efficiency gains of up to ˜9 times that of the standard genetic algorithm, and thus is a promising tool for optimization of unstable or fragile systems. The statistical learning upon which these algorithms are predicated also provides the means for obtaining a control problem’s Hessian matrix with no additional experimental overhead. The forced optimal covariance adaptive learning (FOCAL) method is introduced to enable retrieval of the Hessian matrix, which can reveal information about the landscape’s local structure and dynamic mechanism. Exploitation of such algorithms in quantum control experiments should enhance their efficiency and provide additional fundamental insights.
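
    As a stand-in for the derandomized strategies discussed here, a minimal (1+1) evolution strategy with the classical 1/5th-success step-size rule conveys the closed-loop idea; the `fitness` callable is a placeholder for the measured experimental signal, and none of this is the laboratory algorithm itself.

        import numpy as np

        def one_plus_one_es(fitness, x0, sigma=0.1, iters=200):
            """Minimal (1+1) evolution strategy with a 1/5th-success step-size rule."""
            x = np.asarray(x0, dtype=float)
            fx = fitness(x)
            for _ in range(iters):
                y = x + sigma * np.random.randn(x.size)   # mutate all pulse-shaper parameters
                fy = fitness(y)
                if fy > fx:                                # maximization: keep improvements
                    x, fx = y, fy
                    sigma *= 1.22                          # success: enlarge the step size
                else:
                    sigma *= 0.82                          # failure: shrink the step size
            return x, fx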

  11. Accelerated optimization and automated discovery with covariance matrix adaptation for experimental quantum control

    SciTech Connect

    Roslund, Jonathan; Shir, Ofer M.; Rabitz, Herschel; Baeck, Thomas

    2009-10-15

    Optimization of quantum systems by closed-loop adaptive pulse shaping offers a rich domain for the development and application of specialized evolutionary algorithms. Derandomized evolution strategies (DESs) are presented here as a robust class of optimizers for experimental quantum control. The combination of stochastic and quasi-local search embodied by these algorithms is especially amenable to the inherent topology of quantum control landscapes. Implementation of DES in the laboratory results in efficiency gains of up to ~9 times that of the standard genetic algorithm, and thus is a promising tool for optimization of unstable or fragile systems. The statistical learning upon which these algorithms are predicated also provides the means for obtaining a control problem's Hessian matrix with no additional experimental overhead. The forced optimal covariance adaptive learning (FOCAL) method is introduced to enable retrieval of the Hessian matrix, which can reveal information about the landscape's local structure and dynamic mechanism. Exploitation of such algorithms in quantum control experiments should enhance their efficiency and provide additional fundamental insights.

  12. Communication Range Dynamics and Performance Analysis for a Self-Adaptive Transmission Power Controller.

    PubMed

    Lucas Martínez, Néstor; Martínez Ortega, José-Fernán; Hernández Díaz, Vicente; Del Toro Matamoros, Raúl M

    2016-01-01

    The deployment of the nodes in a Wireless Sensor and Actuator Network (WSAN) is typically restricted by the sensing and acting coverage. This implies that the locations of the nodes may be, and usually are, not optimal from the point of view of the radio communication. Additionally, when the transmission power is tuned for those locations, there are other unpredictable factors that can cause connectivity failures, like interferences, signal fading due to passing objects and, of course, radio irregularities. A control-based self-adaptive system is a typical solution to improve the energy consumption while keeping good connectivity. In this paper, we explore how the communication range for each node evolves along the iterations of an energy saving self-adaptive transmission power controller when using different parameter sets in an outdoor scenario, providing a WSAN that automatically adapts to surrounding changes keeping good connectivity. The results obtained in this paper show how the parameters with the best performance keep a k-connected network, where k is in the range of the desired node degree plus or minus a specified tolerance value. PMID:27187397

  13. Communication Range Dynamics and Performance Analysis for a Self-Adaptive Transmission Power Controller †

    PubMed Central

    Lucas Martínez, Néstor; Martínez Ortega, José-Fernán; Hernández Díaz, Vicente; del Toro Matamoros, Raúl M.

    2016-01-01

    The deployment of the nodes in a Wireless Sensor and Actuator Network (WSAN) is typically restricted by the sensing and acting coverage. This implies that the locations of the nodes may be, and usually are, not optimal from the point of view of the radio communication. Additionally, when the transmission power is tuned for those locations, there are other unpredictable factors that can cause connectivity failures, like interferences, signal fading due to passing objects and, of course, radio irregularities. A control-based self-adaptive system is a typical solution to improve the energy consumption while keeping good connectivity. In this paper, we explore how the communication range for each node evolves along the iterations of an energy saving self-adaptive transmission power controller when using different parameter sets in an outdoor scenario, providing a WSAN that automatically adapts to surrounding changes keeping good connectivity. The results obtained in this paper show how the parameters with the best performance keep a k-connected network, where k is in the range of the desired node degree plus or minus a specified tolerance value. PMID:27187397

  14. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms. PMID:25751878
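
    The flavor of approximate value iteration analyzed above can be conveyed on a small finite model, with an artificial per-sweep error `eps` standing in for the value-function approximation error; this toy stochastic-shortest-path sketch is only illustrative and is not the paper's deterministic nonlinear setting.

        import numpy as np

        def approx_value_iteration(P, c, goal, eps=0.0, iters=100):
            """Undiscounted value iteration on a finite MDP with an absorbing goal state.

            P   : (n_actions, n_states, n_states) transition probabilities
            c   : (n_states, n_actions) stage costs (zero at the goal state)
            eps : magnitude of an artificial approximation error injected per sweep.
            """
            n_states = c.shape[0]
            V = np.zeros(n_states)
            for _ in range(iters):
                Q = c + np.einsum('ask,k->sa', P, V)                 # backup for every (state, action)
                V_new = Q.min(axis=1)
                V_new += eps * (2 * np.random.rand(n_states) - 1)    # injected approximation error
                V_new[goal] = 0.0                                    # keep the absorbing goal exact
                V = V_new
            policy = (c + np.einsum('ask,k->sa', P, V)).argmin(axis=1)
            return V, policy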

  15. Reinforcement learning for adaptive optimal control of unknown continuous-time nonlinear systems with input constraints

    NASA Astrophysics Data System (ADS)

    Yang, Xiong; Liu, Derong; Wang, Ding

    2014-03-01

    In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without the requirement for the knowledge of system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.

  16. Neural-network-observer-based optimal control for unknown nonlinear systems using adaptive dynamic programming

    NASA Astrophysics Data System (ADS)

    Liu, Derong; Huang, Yuzhu; Wang, Ding; Wei, Qinglai

    2013-09-01

    In this paper, an observer-based optimal control scheme is developed for unknown nonlinear systems using adaptive dynamic programming (ADP) algorithm. First, a neural-network (NN) observer is designed to estimate system states. Then, based on the observed states, a neuro-controller is constructed via ADP method to obtain the optimal control. In this design, two NN structures are used: a three-layer NN is used to construct the observer which can be applied to systems with higher degrees of nonlinearity and without a priori knowledge of system dynamics, and a critic NN is employed to approximate the value function. The optimal control law is computed using the critic NN and the observer NN. Uniform ultimate boundedness of the closed-loop system is guaranteed. The actor, critic, and observer structures are all implemented in real-time, continuously and simultaneously. Finally, simulation results are presented to demonstrate the effectiveness of the proposed control scheme.

  17. Adaptive polarimetric image representation for contrast optimization of a polarized beacon through fog

    NASA Astrophysics Data System (ADS)

    Panigrahi, Swapnesh; Fade, Julien; Alouini, Mehdi

    2015-06-01

    We present a contrast-maximizing optimal linear representation of polarimetric images obtained from a snapshot polarimetric camera for enhanced vision of a polarized light source in obscured weather conditions (fog, haze, cloud) over long distances (above 1 km). We quantitatively compare the gain in contrast obtained by different linear representations of the experimental polarimetric images taken during rapidly varying foggy conditions. It is shown that the adaptive image representation that depends on the correlation in background noise fluctuations in the two polarimetric images provides an optimal contrast enhancement over all weather conditions as opposed to a simple difference image which underperforms during low visibility conditions. Finally, we derive the analytic expression of the gain in contrast obtained with this optimal representation and show that the experimental results are in agreement with the assumed correlated Gaussian noise model.
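
    Under the correlated-Gaussian-background assumption mentioned above, a contrast-maximizing linear combination of the two polarimetric images takes the familiar whitened form w ∝ Σ⁻¹Δm; the sketch below, with hypothetical bg_mask/src_mask inputs, illustrates this and is not the authors' processing chain.

        import numpy as np

        def optimal_projection(img_par, img_perp, bg_mask, src_mask):
            """Contrast-maximizing linear combination of two polarimetric images.

            The weights w ~ Sigma^{-1} (m_src - m_bg) use the 2x2 covariance of the
            background fluctuations, so correlated noise is whitened before projection
            (a plain difference image corresponds to fixed weights [1, -1]).
            """
            stack = np.stack([img_par.ravel(), img_perp.ravel()])   # 2 x n_pixels
            bg = stack[:, bg_mask.ravel()]
            src = stack[:, src_mask.ravel()]
            sigma = np.cov(bg)                                      # 2x2 background covariance
            delta = src.mean(axis=1) - bg.mean(axis=1)              # mean source excess
            w = np.linalg.solve(sigma, delta)                       # contrast-optimal weights
            return w[0] * img_par + w[1] * img_perp, w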

  18. Beam width and transmitter power adaptive to tracking system performance for free-space optical communication.

    PubMed

    Arnon, S; Rotman, S; Kopeika, N S

    1997-08-20

    The basic free-space optical communication system includes at least two satellites. To communicate between them, the transmitter satellite must track the beacon of the receiver satellite and point the information optical beam in its direction. Optical tracking and pointing systems for free space suffer during tracking from high-amplitude vibration because of background radiation from interstellar objects such as the Sun, Moon, Earth, and stars in the tracking field of view or the mechanical impact from satellite internal and external sources. The vibrations of beam pointing increase the bit error rate and jam communication between the two satellites. One way to overcome this problem is to increase the satellite receiver beacon power. However, this solution requires increased power consumption and weight, both of which are disadvantageous in satellite development. Considering these facts, we derive a mathematical model of a communication system that adapts optimally the transmitter beam width and the transmitted power to the tracking system performance. Based on this model, we investigate the performance of a communication system with discrete element optical phased array transmitter telescope gain. An example for a practical communication system between a Low Earth Orbit Satellite and a Geostationary Earth Orbit Satellite is presented. From the results of this research it can be seen that a four-element adaptive transmitter telescope is sufficient to compensate for vibration amplitude doubling. The benefits of the proposed model are less required transmitter power and improved communication system performance. PMID:18259455

  19. Orbit design and optimization based on global telecommunication performance metrics

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.

    2006-01-01

    The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and distinct features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.

  20. Connective tissue adaptations in the fingers of performance sport climbers.

    PubMed

    Schreiber, Tonja; Allenspach, Philippe; Seifert, Burkhardt; Schweizer, Andreas

    2015-01-01

    This study investigates the changes in the connective tissue of the fingers of performance sport climbers after a minimum of 15 years of climbing. Evaluation was performed by ultrasonography on the palmar side of fingers (Dig) II-V to measure the thickness of the A2 and A4 annular pulleys, the flexor digitorum superficialis (FDS) and profundus (FDP) tendons, and the palmar plates (PPs) of the proximal interphalangeal (PIP) and distal interphalangeal (DIP) joints in the sagittal and axial directions. In total, 31 experienced male sport climbers (mean age 37 y, range 30-48 y; climbing grade French scale median 8b, range 7b+ to 9a+) participated in the study. The control group consisted of 20 male non-climbers (age 37 y, range 30-51 y). The A2 and A4 pulleys in climbers were all significantly thicker (A2 Dig III 62%, Dig IV 69%; A4 Dig III 69%, Dig IV 76%) compared to the non-climbers' pulleys. All PPs of the DIP joints were also significantly thicker, particularly at Dig III and IV (76% and 67%), whereas the differences in the PPs at the PIP joints reached significance for only three joints. Differences in the diameter of the flexor tendons were less distinct (1-21%), being significant only over the middle phalanx. High loading of the fingers of rock climbers over a minimum of 15 years of climbing induced considerable connective tissue adaptations in the fingers, most distinct at the flexor tendon pulleys and the joint capsules (PPs) of the DIP joints, and well detectable by ultrasound. PMID:26267120

  1. Perturbing engine performance measurements to determine optimal engine control settings

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-12-30

    Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
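
    A rough sketch of a generic perturb-and-adjust loop in the spirit of the abstract, assuming a user-supplied measure() callback and a single scalar control parameter; the gain, perturbation size, and convergence test are placeholder choices, not the patented method.

      import random

      def perturb_and_adjust(measure, param, target, gain=0.05, perturb=0.02, iters=200):
          # measure(param) returns the engine performance variable for a control setting.
          for _ in range(iters):
              value = measure(param)
              if abs(value - target) < 1e-3:          # close enough to the target
                  break
              # Artificially perturb the measured performance variable ...
              perturbed = value * (1.0 + random.uniform(-perturb, perturb))
              # ... and adjust the control parameter so the variable moves toward the target.
              param += gain * (target - perturbed)
          return param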

  2. Dynamic Aberration Correction for Conformal Window of High-Speed Aircraft Using Optimized Model-Based Wavefront Sensorless Adaptive Optics.

    PubMed

    Dong, Bing; Li, Yan; Han, Xin-Li; Hu, Bin

    2016-01-01

    For high-speed aircraft, a conformal window is used to optimize the aerodynamic performance. However, the local shape of the conformal window leads to large amounts of dynamic aberration that vary with look angle. In this paper, a deformable mirror (DM) and model-based wavefront sensorless adaptive optics (WSLAO) are used for dynamic aberration correction of an infrared remote sensor equipped with a conformal window and scanning mirror. In model-based WSLAO, aberration is captured using Lukosz modes, and the low spatial frequency content of the image spectral density is used as the metric function. Simulations show that aberrations induced by the conformal window are dominated by a few low-order Lukosz modes. To optimize the dynamic correction, only the dominant Lukosz modes need to be corrected, and the image size can be minimized to reduce the time required to compute the metric function. In our experiment, a 37-channel DM is used to mimic the dynamic aberration of a conformal window with a scanning rate of 10 degrees per second, and a 52-channel DM is used for correction. For a 128 × 128 image, the mean value of image sharpness during dynamic correction is 1.436 × 10^-5 with the optimized correction and 1.427 × 10^-5 with the un-optimized correction. We also demonstrate that model-based WSLAO can achieve convergence two times faster than the traditional stochastic parallel gradient descent (SPGD) method. PMID:27598161
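
    A small sketch of a low-spatial-frequency image-sharpness metric of the general type mentioned above, assuming a 2-D grayscale image; the cutoff radius and normalization are illustrative choices rather than the authors' exact metric.

      import numpy as np

      def low_frequency_metric(image, cutoff=0.1):
          # Power spectral density of the image, with DC shifted to the center.
          spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
          ny, nx = image.shape
          y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
          r = np.sqrt((x / (nx / 2.0)) ** 2 + (y / (ny / 2.0)) ** 2)
          # Sum the power inside a low-frequency annulus, excluding the DC term,
          # and normalize by the total power so the metric is exposure-independent.
          low = spectrum[(r > 0) & (r < cutoff)].sum()
          return low / spectrum.sum()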

  3. Improving the Hydrodynamic Performance of Diffuser Vanes via Shape Optimization

    NASA Technical Reports Server (NTRS)

    Goel, Tushar; Dorney, Daniel J.; Haftka, Raphael T.; Shyy, Wei

    2007-01-01

    The performance of a diffuser in a pump stage depends on its configuration and placement within the stage. The influence of vane shape on the hydrodynamic performance of a diffuser has been studied. The goal of this effort has been to improve the performance of a pump stage by optimizing the shape of the diffuser vanes. The shape of the vanes was defined using Bezier curves and circular arcs. Surrogate-model-based tools were used to identify regions of the vane that have a strong influence on its performance. Optimization of the vane shape, in the absence of manufacturing and stress constraints, led to a nearly nine percent reduction in the total pressure losses compared to the baseline design by reducing the extent of the base separation.

  4. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

    SciTech Connect

    Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2007-06-01

    Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
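
    A toy illustration of the cache-blocking idea analyzed in the paper, written in Python/NumPy for readability rather than speed; the block size, boundary handling, and 5-point Jacobi kernel are assumptions, and real stencil tuning of this kind is done in C/Fortran/CUDA.

      import numpy as np

      def blocked_jacobi_sweep(grid, block=64):
          # One Jacobi-style 5-point stencil sweep over the interior of a 2-D grid,
          # traversed tile by tile so each tile's working set stays cache-resident.
          out = grid.copy()
          n, m = grid.shape
          for ib in range(1, n - 1, block):
              for jb in range(1, m - 1, block):
                  i_end = min(ib + block, n - 1)
                  j_end = min(jb + block, m - 1)
                  out[ib:i_end, jb:j_end] = 0.25 * (
                      grid[ib - 1:i_end - 1, jb:j_end] + grid[ib + 1:i_end + 1, jb:j_end] +
                      grid[ib:i_end, jb - 1:j_end - 1] + grid[ib:i_end, jb + 1:j_end + 1])
          return out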

  5. The Future of Food: Regional Adaptation Strategies for Optimizing Grain Yields Under Climate Change

    NASA Astrophysics Data System (ADS)

    Nicholas, K. A.; Chhetri, N.; Girvetz, E. H.; McCarthy, H. R.; Twine, T. E.; Ummenhofer, C. C.

    2010-12-01

    Current projections of crop yields under climate change generally neglect to account for the potential of farmer adaptation to counteract environmental drivers of yield decreases, but such adaptation will be increasingly important for food security. We used a process-based crop model (Agro-IBIS) and a suite of climate projections based on multiple IPCC AR4 models under three greenhouse gas emission scenarios to project climate change impacts on the yield of maize in Free State, South Africa, and Iowa, USA, and of wheat in Victoria, Australia, and Punjab, India. We found, for example in Iowa, that projected substantial increases in temperatures and slight decreases in precipitation result in a compressed growing period, with peak productivity occurring in mid-May rather than mid-July and yield decreasing by up to 40% below current levels by the end of the century. We then used this information to identify regionally specific adaptation strategies by examining climate-limiting factors on the timing of harvest and quantity of yields in each location, together with the current growing practices and resource availability. These adaptation strategies were developed with the intention of replicating current yields at current timing (for example, by selecting longer-ripening cultivars) and also of optimizing yields under the new climate regime (for example, by double-cropping a maize/soy rotation in the same growing season). Overall, this research shows that promising adaptation options exist in each region and highlights the need for sophisticated and regionally sensitive adaptation strategies to sustain and increase food production in the 21st century.

  6. Architecture and performance of astronomical adaptive optics systems

    NASA Technical Reports Server (NTRS)

    Bloemhof, E.

    2002-01-01

    In recent years the technological advances of adaptive optics have enabled a great deal of innovative science. In this lecture I review the system-level design of modern astronomical AO instruments, and discuss their current capabilities.

  7. Empirical performance of the spectral independent morphological adaptive classifier

    NASA Astrophysics Data System (ADS)

    Montgomery, Joel B.; Montgomery, Christine T.; Sanderson, Richard B.; McCalmont, John F.

    2008-04-01

    Effective missile warning and countermeasures continue to be an unfulfilled goal for the Air Force as well as the wider military and civilian aerospace community. To meet the detection and jamming timeframes dictated by today's proliferated missiles and near-term upgraded threats, sensors with the required sensitivity, field of regard, and spatial resolution are being pursued in conjunction with advanced processing techniques allowing for detection and discrimination beyond 10 km. The greatest driver of any missile warning system is detection and correct declaration: all targets need to be detected with high confidence and with very few false alarms. Generally, imaging sensors are limited in their detection capability by the presence of heavy background clutter, sun glints, and inherent sensor noise. Many threat environments include false alarm sources such as burning fuels, flares, exploding ordnance, and industrial emitters. Spectral discrimination has been shown to be one of the most effective methods of improving the performance of typical missile warning sensors, particularly in heavy clutter. Its utility has been demonstrated in the field and on board multiple aircraft. Utilization of the background and clutter spectral content, coupled with additional spatial and temporal filtering techniques, has yielded robust adaptive real-time algorithms that increase signal-to-clutter ratios against point targets and thereby increase detection range. The algorithm outlined here is the result of continued work, with results reported against visible missile tactical data. The results are summarized and compared in terms of the computational cost expected when implemented on a real-time field-programmable gate array (FPGA) processor.

  8. Hull-form optimization of KSUEZMAX to enhance resistance performance

    NASA Astrophysics Data System (ADS)

    Park, Jong-Heon; Choi, Jung-Eun; Chun, Ho-Hwan

    2015-01-01

    This paper deploys optimization techniques to obtain the optimum hull form of KSUEZMAX at the conditions of full-load draft and design speed. The process has been carried out using the RaPID-HOP program. The bow and stern hull-forms are optimized separately, and the resulting versions of the two are then combined. The objective functions are the minimum values of the wave-making and viscous pressure resistance coefficients for the bow and stern, respectively. Parametric modification functions for the bow hull-form variation are SAC shape, section shape (U-V type, DLWL type), and bulb shape (bulb height and size); those for the stern are SAC and section shape (U-V type, DLWL type). The WAVIS version 1.3 code is used for the potential-flow and viscous-flow solvers. Prior to the optimization, a parametric study was conducted to observe the effects of the design parameters on the objective functions. SQP was applied as the optimization algorithm. Model tests were conducted in a towing tank to evaluate the resistance performance of the optimized hull-form. The optimized hull-form brings reductions of 2.4% and 6.8% in the total and residual resistance coefficients compared to those of the original hull-form. The propulsive efficiency increases by 2.0% and the delivered power is reduced by 3.7%, whereas the propeller rotating speed increases slightly, by 0.41 rpm.

  9. Simultaneous optimization of actuator position and control parameter for adaptive mechanical structures

    NASA Astrophysics Data System (ADS)

    Weber, Christian-Toralf; Gabbert, Ulrich; Enzmann, Marc R.

    1998-07-01

    The design of adaptive mechanical structures is divided into three parts: the structural design, the controller design, and the placement of actuators and sensors. The objective of the design is to create a mechanical structure that meets the physical and technical requirements. The controller design includes the definition of the optimal control law and the parameters required to derive an actuator adjustment from the measured signals of the structural response. The placement of actuators and sensors answers the question of their optimal distribution in the structure. The sensor placement determines which signals are available to the controller, while the position of the actuators determines at which points control forces may act to influence the structural behavior in a suitable manner. Determining the optimal positions of the actuators requires information about the controller design, the sensor positions, and the layout and behavior of the structure. Based on the ideas of shape and topology optimization, a procedure is presented to handle simultaneously the discrete positions of the actuators and the continuous parameters of the controller. The method is based on an augmented Lagrangian function to include additional conditions and the discontinuity of the discrete variables in the objective function. The method is demonstrated on a test example.

  10. Adaptive sensing and optimal power allocation for wireless video sensors with sigma-delta imager.

    PubMed

    Marijan, Malisa; Demirkol, Ilker; Maricić I, Danijel; Sharma, Gaurav; Ignjatovi, Zeljko

    2010-10-01

    We consider optimal power allocation for wireless video sensors (WVSs), including the image sensor subsystem in the system analysis. By assigning a power-rate-distortion (P-R-D) characteristic for the image sensor, we build a comprehensive P-R-D optimization framework for WVSs. For a WVS node operating under a power budget, we propose power allocation among the image sensor, compression, and transmission modules, in order to minimize the distortion of the video reconstructed at the receiver. To demonstrate the proposed optimization method, we establish a P-R-D model for an image sensor based upon a pixel level sigma-delta (Σ∆) image sensor design that allows investigation of the tradeoff between the bit depth of the captured images and spatio-temporal characteristics of the video sequence under the power constraint. The optimization results obtained in this setting confirm that including the image sensor in the system optimization procedure can improve the overall video quality under power constraint and prolong the lifetime of the WVSs. In particular, when the available power budget for a WVS node falls below a threshold, adaptive sensing becomes necessary to ensure that the node communicates useful information about the video content while meeting its power budget. PMID:20551000
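
    An illustrative brute-force version of the power-allocation idea described above, assuming a user-supplied distortion model d_model(p_sense, p_compress, p_transmit) that stands in for the paper's P-R-D characteristic; the grid resolution and three-way split are arbitrary choices.

      import itertools

      def allocate_power(budget, d_model, step=0.05):
          # Split a power budget among sensing, compression and transmission so that
          # the modeled distortion is minimal. Returns (best_distortion, (ps, pc, pt)).
          best = (float("inf"), None)
          fractions = [i * step for i in range(1, int(1 / step))]
          for fs, fc in itertools.product(fractions, fractions):
              if fs + fc >= 1.0:           # leave a nonzero share for transmission
                  continue
              ps, pc, pt = fs * budget, fc * budget, (1.0 - fs - fc) * budget
              d = d_model(ps, pc, pt)
              if d < best[0]:
                  best = (d, (ps, pc, pt))
          return best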

  11. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI

    PubMed Central

    Churchill, Nathan W.; Spring, Robyn; Afshin-Pour, Babak; Dong, Fan; Strother, Stephen C.

    2015-01-01

    BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the “pipeline”) significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard “fixed” preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets. PMID:26161667

  12. An adaptive compromise programming method for multi-objective path optimization

    NASA Astrophysics Data System (ADS)

    Li, Rongrong; Leung, Yee; Lin, Hui; Huang, Bo

    2013-04-01

    Network routing problems generally involve multiple objectives which may conflict with one another. An effective way to solve such problems is to generate a set of Pareto-optimal solutions that is small enough to be handled by a decision maker and large enough to give an overview of all possible trade-offs among the conflicting objectives. To accomplish this, the present paper proposes an adaptive method based on compromise programming to assist decision makers in identifying Pareto-optimal paths, particularly for non-convex problems. This method can provide an unbiased approximation of the Pareto-optimal alternatives by adaptively changing the origin and direction of search in the objective space via the dynamic updating of the largest unexplored region until an appropriately structured Pareto front is captured. To demonstrate the efficacy of the proposed methodology, a case study is carried out for the transportation of dangerous goods in the road network of Hong Kong with the support of a geographic information system. The experimental results confirm the effectiveness of the approach.
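
    A minimal sketch of a compromise-programming scalarization that could be used to rank candidate paths, assuming objective values together with ideal (utopia) and nadir points estimated from the candidate set; the adaptive relocation of the search origin described in the paper is not reproduced here.

      def compromise_score(objectives, ideal, nadir, weights, p=2):
          # Weighted Lp distance of a candidate's objective vector from the ideal point,
          # with each objective normalized by its ideal-to-nadir span.
          # p -> infinity would give the Chebyshev (minimax) metric.
          terms = []
          for f, fi, fn, w in zip(objectives, ideal, nadir, weights):
              span = (fn - fi) or 1.0            # avoid division by zero
              terms.append((w * (f - fi) / span) ** p)
          return sum(terms) ** (1.0 / p)

    Ranking candidate paths by this score for different weight vectors yields an approximation of the Pareto front, which is the step the paper makes adaptive.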

  13. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices

    PubMed Central

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-01-01

    To solve the computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs. PMID:26487474

  14. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    PubMed

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example. PMID:24808590

  15. A Study on the Optimization Performance of Fireworks and Cuckoo Search Algorithms in Laser Machining Processes

    NASA Astrophysics Data System (ADS)

    Goswami, D.; Chakraborty, S.

    2014-11-01

    Laser machining is a promising non-contact process for effective machining of difficult-to-process advanced engineering materials. Increasing interest in the use of lasers for various machining operations can be attributed to its several unique advantages, like high productivity, non-contact processing, elimination of finishing operations, adaptability to automation, reduced processing cost, improved product quality, greater material utilization, minimum heat-affected zone and green manufacturing. To achieve the best desired machining performance and high quality characteristics of the machined components, it is extremely important to determine the optimal values of the laser machining process parameters. In this paper, fireworks algorithm and cuckoo search (CS) algorithm are applied for single as well as multi-response optimization of two laser machining processes. It is observed that although almost similar solutions are obtained for both these algorithms, CS algorithm outperforms fireworks algorithm with respect to average computation time, convergence rate and performance consistency.

  16. Predicting the optimized thermoelectric performance of MgAgSb

    NASA Astrophysics Data System (ADS)

    Sheng, C. Y.; Liu, H. J.; Fan, D. D.; Cheng, L.; Zhang, J.; Wei, J.; Liang, J. H.; Jiang, P. H.; Shi, J.

    2016-05-01

    Using first-principles method and Boltzmann theory, we provide an accurate prediction of the electronic band structure and thermoelectric transport properties of α-MgAgSb. Our calculations demonstrate that only when an appropriate exchange-correlation functional is chosen can we correctly reproduce the semiconducting nature of this compound. By fine tuning the carrier concentration, the thermoelectric performance of α-MgAgSb can be significantly optimized, which exhibits a strong temperature dependence and gives a maximum ZT value of 1.7 at 550 K. We also provide a simple map by which one can efficiently find the best doping atoms and optimal doping content.

  17. Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization.

    SciTech Connect

    Romero, Vicente José; Chen, Chun-Hung (George Mason University, Fairfax, VA)

    2006-11-01

    A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effects. One simply asks "Is that alternative better or worse than this one?" rather than "How much better or worse is that alternative than this one?" The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. However, in this report we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and it uniquely has mechanisms for quantifying and controlling error likelihood in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, so it could be useful as a reference standard against which the efficiency and robustness of other methods can be compared, analogous to the role that Monte Carlo simulation plays in uncertainty propagation.

  18. Performance optimization for rotors in hover and axial flight

    NASA Technical Reports Server (NTRS)

    Quackenbush, T. R.; Wachspress, D. A.; Kaufman, A. E.; Bliss, D. B.

    1989-01-01

    Performance optimization for rotors in hover and axial flight is a topic of continuing importance to rotorcraft designers. The aim of this Phase 1 effort has been to demonstrate that a linear optimization algorithm could be coupled to an existing influence-coefficient hover performance code. This code, dubbed EHPIC (Evaluation of Hover Performance using Influence Coefficients), uses a quasi-linear wake relaxation to solve for the rotor performance. The coupling was accomplished by expanding the matrix of linearized influence coefficients in EHPIC to accommodate design variables and by deriving new coefficients for linearized equations governing perturbations in power and thrust. These coefficients formed the input to a linear optimization analysis, which used the flow tangency conditions on the blade and in the wake to impose equality constraints on the expanded system of equations; user-specified inequality constraints were also employed to bound the changes in the design. It was found that this locally linearized analysis could be invoked to predict a design change that would produce a reduction in the power required by the rotor at constant thrust. Thus, an efficient search for improved versions of the baseline design can be carried out while retaining the accuracy inherent in a free wake/lifting surface performance analysis.

  19. Performance and optimization of X-ray grating interferometry.

    PubMed

    Thuering, T; Stampanoni, M

    2014-03-01

    The monochromatic and polychromatic performance of a grating interferometer is theoretically analysed. The smallest detectable refraction angle is used as a metric for the efficiency in acquiring a differential phase-contrast image. Analytical formulae for the visibility and the smallest detectable refraction angle are derived for Talbot-type and Talbot-Lau-type interferometers, respectively, providing a framework for the optimization of the geometry. The polychromatic performance of a grating interferometer is investigated analytically by calculating the energy-dependent interference fringe visibility, the spectral acceptance and the polychromatic interference fringe visibility. The optimization of grating interferometry is a crucial step for the design of application-specific systems with maximum performance. PMID:24470411

  20. Evaluation of adaptive dynamic range optimization in adverse listening conditions for cochlear implants

    PubMed Central

    Ali, Hussnain; Hazrati, Oldooz; Tobey, Emily A.; Hansen, John H. L

    2014-01-01

    The aim of this study is to investigate the effect of Adaptive Dynamic Range Optimization (ADRO) on speech identification for cochlear implant (CI) users in adverse listening conditions. In this study, anechoic quiet, noisy, reverberant, noisy reverberant, and reverberant noisy conditions are evaluated. Two scenarios are considered when modeling the combined effects of reverberation and noise: (a) noise is added to the reverberant speech, and (b) noisy speech is reverberated. CI users were tested in different listening environments using IEEE sentences presented at 65 dB sound pressure level. No significant effect of ADRO processing on speech intelligibility was observed. PMID:25190428

  1. Evaluation of adaptive dynamic range optimization in adverse listening conditions for cochlear implants.

    PubMed

    Ali, Hussnain; Hazrati, Oldooz; Tobey, Emily A; Hansen, John H L

    2014-09-01

    The aim of this study is to investigate the effect of Adaptive Dynamic Range Optimization (ADRO) on speech identification for cochlear implant (CI) users in adverse listening conditions. In this study, anechoic quiet, noisy, reverberant, noisy reverberant, and reverberant noisy conditions are evaluated. Two scenarios are considered when modeling the combined effects of reverberation and noise: (a) noise is added to the reverberant speech, and (b) noisy speech is reverberated. CI users were tested in different listening environments using IEEE sentences presented at 65 dB sound pressure level. No significant effect of ADRO processing on speech intelligibility was observed. PMID:25190428

  2. Workshop on system tuning, performance measurement and performance optimization of an RSX11M system

    SciTech Connect

    Downward, J.G.

    1981-05-20

    Topics discussed include thrashing in an RSX11M system - what to do; using solid state disk emulators as the swapping device - performance improvement, performance measurement techniques; capacity planning; bis buffering; and DECNET-11M optimization - performance that can be expected for real environments.

  3. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  4. Adaptable structural synthesis using advanced analysis and optimization coupled by a computer operating system

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1979-01-01

    A finite element program is linked with a general purpose optimization program in a 'programming system' which includes user-supplied codes that contain problem-dependent formulations of the design variables, objective function and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.

  5. A wavelet-optimized, very high order adaptive grid and order numerical method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, differentiating this polynomial, and finally evaluating the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid, and this grid is refined locally based on wavelet analysis.
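
    A small sketch of the interpolate-then-differentiate construction on a Chebyshev grid, using NumPy's polynomial helpers; the node count and the test function are arbitrary choices for illustration.

      import numpy as np

      def chebyshev_derivative(f, n=12):
          # Approximate f'(x) on [-1, 1] by interpolating f at Chebyshev-Lobatto
          # points with a degree-n polynomial and differentiating the interpolant.
          x = np.cos(np.pi * np.arange(n + 1) / n)      # Chebyshev-Lobatto nodes
          coeffs = np.polyfit(x, f(x), n)               # interpolating polynomial
          dcoeffs = np.polyder(coeffs)                  # differentiate it exactly
          return x, np.polyval(dcoeffs, x)              # derivative at the nodes

      # Example: derivative of sin(x); the maximum error drops rapidly as n grows.
      x, dfdx = chebyshev_derivative(np.sin, n=12)
      print(np.max(np.abs(dfdx - np.cos(x))))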

  6. Field of view selection for optimal airborne imaging sensor performance

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Barnard, P. Werner; Fildis, Halidun; Erbudak, Mustafa; Senger, Tolga; Alpman, Mehmet E.

    2014-05-01

    The choice of the Field of View (FOV) of imaging sensors used in airborne targeting applications has a major impact on the overall performance of the system. A market survey of published data on sensors used in stabilized airborne targeting systems shows a trend of ever-narrowing FOVs housed in smaller and lighter volumes. This approach promotes the ever-increasing geometric resolution provided by narrower FOVs, while seemingly ignoring the influence FOV selection has on the sensor's sensitivity, the effects of diffraction, the influence of sight-line jitter, and, collectively, the overall system performance. This paper presents a trade-off methodology to select the optimal FOV for an imaging sensor that is limited in aperture diameter by mechanical constraints (such as the space/volume available and window size) by balancing the influences FOV has on sensitivity and resolution and thereby optimizing the system's performance. The methodology may be applied to staring-array-based imaging sensors across all wavebands, from visible/day cameras through to long-wave infrared thermal imagers. Some examples of sensor analysis applying the trade-off methodology are given that highlight the performance advantages that can be gained by maximizing the aperture diameter and choosing the optimal FOV for an imaging sensor used in airborne targeting applications.

  7. Strong stabilization servo controller with optimization of performance criteria.

    PubMed

    Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor

    2011-07-01

    Synthesis of a simple robust controller with a pole placement technique and an H∞ metric is the method used for control of a servo mechanism with BLDC and BDC electric motors. The method includes solving a polynomial equation on the basis of the chosen characteristic polynomial using the Manabe standard polynomial form and parametric solutions. The parametric solutions are introduced directly into the structure of the servo controller. On the basis of the chosen parametric solutions, the robustness of the closed-loop system is assessed through uncertainty models and assessment of the norm ‖·‖∞. The design procedure and the optimization are performed with differential evolution (DE), a genetic algorithm. The DE optimization method determines a suboptimal solution during the optimization on the basis of a spectrally square polynomial and Šiljak's absolute stability test. The stability of the designed controller during the optimization is checked with Lipatov's stability condition. Both approaches, Šiljak's test and Lipatov's condition, check the robustness and stability characteristics on the basis of the polynomial's coefficients, and are very convenient for automated design of closed-loop control and for application in optimization algorithms such as DE. PMID:21501837
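
    A hedged sketch of the overall approach, optimizing controller parameters with SciPy's differential evolution while penalizing unstable candidates; the cost model, the characteristic polynomial, and the simple root-based stability test are stand-ins for the Manabe/Šiljak/Lipatov machinery used in the paper.

      import numpy as np
      from scipy.optimize import differential_evolution

      def tune_controller(performance_cost, n_params, bounds=(-10.0, 10.0)):
          # performance_cost(params) must return (cost, char_poly_coeffs); both the
          # cost model and the closed-loop characteristic polynomial come from the
          # user's plant model (an assumption of this sketch).
          def objective(params):
              cost, poly = performance_cost(params)
              roots = np.roots(poly)
              if np.any(roots.real >= 0):        # unstable closed loop: penalize heavily
                  return cost + 1e6
              return cost

          result = differential_evolution(objective, [bounds] * n_params, seed=0)
          return result.x, result.fun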

  8. Partially supervised P300 speller adaptation for eventual stimulus timing optimization: target confidence is superior to error-related potential score as an uncertain label

    NASA Astrophysics Data System (ADS)

    Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom

    2016-04-01

    Objective. Error-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2, p < 0.01) and 1.9 percentage points (session 3, p < 0.05) in AUC using same-day training data over using data from a previous day, which supports classifier adaptation in general. Significance. Using posterior target score alone as a confidence measure resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA

  9. Challenges when performing economic optimization of waste treatment: A review

    SciTech Connect

    Juul, N.; Münster, M.; Ravn, H.; Söderman, M. Ljunggren

    2013-09-15

    Highlights: review of the main optimization tools in the field of waste management; different optimization methods are applied; different fractions are analyzed; there is focus on different parameters in different geographical regions; more research is needed which encompasses both recycling and energy solutions. Abstract: Strategic and operational decisions in waste management, in particular with respect to investments in new treatment facilities, are needed due to a number of factors, including continuously increasing amounts of waste, political demands for efficient utilization of waste resources, and the decommissioning of existing waste treatment facilities. Optimization models can assist in ensuring that these investment strategies are economically feasible. Various economic optimization models for waste treatment have been developed which focus on different parameters. Models focusing on transport are one example, but models focusing on energy production have also been developed, as well as models which take into account a plant's economies of scale, environmental impact, material recovery and social costs. Finally, models combining different criteria for the selection of waste treatment methods in multi-criteria analysis have been developed. A thorough updated review of the existing models is presented, and the main challenges and crucial parameters that need to be taken into account when assessing the economic performance of waste treatment alternatives are identified. The review article will assist both policy-makers and model-developers involved in assessing the economic performance of waste treatment alternatives.

  10. Airbag Landing Impact Performance Optimization for the Orion Crew Module

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; McKinney, John; Corliss, James M.

    2008-01-01

    This report discusses the use of advanced simulation techniques to optimize the performance of the proposed Orion Crew Module airbag landing system design. The Boeing Company and the National Aeronautics and Space Administration's Langley Research Center collaborated in the analysis of the proposed airbag landing system for the next-generation space shuttle replacement, the Orion spacecraft. Using LS-DYNA to simulate the Crew Module landing impacts, two main objectives were established and achieved: the investigation of potential methods of optimizing the airbag performance in order to reduce rebound on the anti-bottoming bags, lower overall landing loads, and increase overall Crew Module stability; and the determination of the Crew Module stability and load boundaries using the optimized airbag design, based on the potential Crew Module landing pitch angles and ground slopes in both the center-of-gravity forward and aft configurations. This paper describes the optimization and the stability and load boundary studies, and presents a summary of the results obtained and key lessons learned from this analysis.

  11. Using condenser performance measurements to optimize condenser cleaning

    SciTech Connect

    Wolff, P.J.; March, A.; Pearson, H.S.

    1996-05-01

    Because plant personnel perform condenser monitoring primarily to determine cleaning schedules, the accuracy and repeatability of a technique should be viewed within the context of a condenser cleaning schedule. Lower accuracy is acceptable if the cleaning schedule arising from that system is identical to the cleaning schedule arising from a technique with higher accuracy. Three condenser performance monitors were implemented and compared within the context of a condenser cleaning schedule to determine the relative advantages of the different condenser monitoring techniques. These systems include a novel on-line system that consists of an electromagnetic flowmeter and an RTD mounted in a compact waterproof cylinder, an overall on-line system, and routine plant tests. The fouling measurements from each system are used in an optimization program which automatically computes a cleaning schedule that minimizes the combined cost of cleaning and the cost of increased fuel consumption caused by condenser fouling. The cleaning schedules resulting from each system's measurements are compared. The optimization routine is also used to evaluate the sensitivity of the optimal cleaning schedules to the fouling rate and the cost, in dollars, of non-optimal cleaning.
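
    A toy version of the cleaning-schedule trade-off, assuming fouling (and hence the extra fuel cost) grows linearly between cleanings; the cost coefficients are placeholders for the measured fouling rates discussed above, not values from the paper.

      def best_cleaning_interval(clean_cost, fouling_rate, fuel_cost_per_unit_fouling,
                                 max_days=365):
          # Pick the cleaning interval (in days) that minimizes the average daily cost
          # of cleaning plus the extra fuel burned because of fouling. With linear
          # fouling growth, the average extra fuel cost over an interval T scales as T/2.
          best_t, best_cost = None, float("inf")
          for t in range(1, max_days + 1):
              daily_cost = (clean_cost / t
                            + fuel_cost_per_unit_fouling * fouling_rate * t / 2.0)
              if daily_cost < best_cost:
                  best_t, best_cost = t, daily_cost
          return best_t, best_cost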

  12. Optimization algorithm in adaptive PMD compensation in 10Gb/s optical communication system

    NASA Astrophysics Data System (ADS)

    Diao, Cao; Li, Tangjun; Wang, Muguang; Gong, Xiangfeng

    2005-02-01

    In this paper, optimization algorithms are introduced for adaptive PMD compensation in a 10 Gb/s optical communication system. The PMD monitoring technique is based on the degree of polarization (DOP), which is a good indicator of PMD because DOP decreases monotonically as the differential group delay (DGD) increases. In order to use DOP as the PMD monitoring feedback signal, the state of DGD in the transmission circuitry must be emulated, so a PMD emulator is designed. A polarization controller (PC) in the fiber multiplexer adjusts the polarization state of the optical signal, and a polarizer is placed at the output of the fiber multiplexer. After the feedback signal reaches the control computer, the optimization program runs to search for the global optimum and controls the PMD through the PC. Several popular modern nonlinear optimization algorithms (Tabu Search, Simulated Annealing, Genetic Algorithm, Artificial Neural Networks, Ant Colony Optimization, etc.) are discussed and compared in order to choose the best optimization algorithm. Every algorithm has its advantages and disadvantages, but in this case the Genetic Algorithm (GA) may be the best: it constantly eliminates the worse candidates so that they have no chance to enter the next generation, giving faster convergence in less time. The PMD can be compensated in very few steps using this algorithm. As a result, the maximum compensation ability of one-stage and two-stage PMD compensation can be reached in a very short time, and the dynamic compensation time is no more than 10 ms.
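
    A minimal genetic-algorithm loop of the kind compared in the paper, maximizing a scalar feedback signal (such as the measured DOP) over a vector of polarization-controller settings; the population size, selection, crossover, and mutation rules are generic textbook choices, not the authors' implementation.

      import random

      def ga_maximize(feedback, dim, pop_size=20, generations=50,
                      mutation=0.1, bounds=(0.0, 1.0)):
          # feedback(settings) returns the monitored scalar (e.g. DOP) to be maximized.
          lo, hi = bounds
          pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
          for _ in range(generations):
              scored = sorted(pop, key=feedback, reverse=True)
              parents = scored[:pop_size // 2]          # discard the worse half
              children = []
              while len(parents) + len(children) < pop_size:
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, dim) if dim > 1 else 0
                  child = a[:cut] + b[cut:]             # one-point crossover
                  child = [min(hi, max(lo, g + random.gauss(0, mutation)))
                           for g in child]              # bounded Gaussian mutation
                  children.append(child)
              pop = parents + children
          return max(pop, key=feedback)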

  13. A DVH-guided IMRT optimization algorithm for automatic treatment planning and adaptive radiotherapy replanning

    SciTech Connect

    Zarepisheh, Masoud; Li, Nan; Long, Troy; Romeijn, H. Edwin; Tian, Zhen; Jia, Xun; Jiang, Steve B.

    2014-06-15

    Purpose: To develop a novel algorithm that incorporates prior treatment knowledge into intensity-modulated radiation therapy optimization to facilitate automatic treatment planning and adaptive radiotherapy (ART) replanning. Methods: The algorithm automatically creates a treatment plan guided by the DVH curves of a reference plan that contains information on the clinician-approved dose-volume trade-offs among different targets/organs and among different portions of a DVH curve for an organ. In ART, the reference plan is the initial plan for the same patient, while for automatic treatment planning the reference plan is selected from a library of clinically approved and delivered plans of previously treated patients with similar medical conditions and geometry. The proposed algorithm employs a voxel-based optimization model and navigates the large voxel-based Pareto surface. The voxel weights are iteratively adjusted to approach a plan that is similar to the reference plan in terms of the DVHs. If the reference plan is feasible but not Pareto optimal, the algorithm generates a Pareto optimal plan with DVHs better than the reference ones. If the reference plan is too restricting for the new geometry, the algorithm generates a Pareto plan with DVHs close to the reference ones. In both cases, the new plans have similar DVH trade-offs as the reference plans. Results: The algorithm was tested using three patient cases and found to be able to automatically adjust the voxel-weighting factors in order to generate a Pareto plan with similar DVH trade-offs as the reference plan. The algorithm has also been implemented on a GPU for high efficiency. Conclusions: A novel prior-knowledge-based optimization algorithm has been developed that automatically adjusts the voxel weights and generates a clinically optimal plan at high efficiency. It is found that the new algorithm can significantly improve the plan quality and planning efficiency in ART replanning and automatic treatment

  14. Multi-objective optimization of gear forging process based on adaptive surrogate meta-models

    NASA Astrophysics Data System (ADS)

    Meng, Fanjuan; Labergere, Carl; Lafon, Pascal; Daniel, Laurent

    2013-05-01

    In the forging industry, net shape or near net shape forging of gears has been the subject of considerable research effort in the last few decades. In this paper, a multi-objective optimization methodology for net shape gear forging process design is discussed. The study consists of four main parts: building a parametric CAD geometry model, simulating the forging process, fitting surrogate meta-models, and optimizing the process with an advanced algorithm. In order to obtain meta-models that best approximate the real response, an adaptive meta-model based design strategy has been applied. This is a continuous process: first, build a preliminary version of the meta-models from the initial simulated calculations; second, improve the accuracy and update the meta-models by adding new representative samples. By using this iterative strategy, the number of initial sample points for real numerical simulations is greatly decreased and the time for the forged gear design is significantly shortened. Finally, an optimal design for an industrial application of a 27-tooth gear forging process is introduced, which includes three optimization variables and two objective functions. A 3D FE numerical simulation model is used to simulate the process, and an advanced thermo-elasto-visco-plastic constitutive equation is used to represent the material behavior. The meta-model applied in this example is kriging and the optimization algorithm is NSGA-II. A comparatively good Pareto optimal front (POF) is obtained with the gradually improved surrogate meta-models.

  15. A multi-layer robust adaptive fault tolerant control system for high performance aircraft

    NASA Astrophysics Data System (ADS)

    Huo, Ying

    Modern high-performance aircraft demand advanced fault-tolerant flight control strategies. Not only control effector failures but also aerodynamic failures, such as wing-body damage, often result in substantially deteriorated performance because of the low available redundancy. As a result, the remaining control actuators may yield substantially lower maneuvering capability, which does not permit the accomplishment of the aircraft's originally specified mission. The problem is to reconfigure the control over the available control redundancy when a mission modification is required to save the aircraft. The proposed robust adaptive fault-tolerant control (RAFTC) system consists of a multi-layer reconfigurable flight controller architecture. It contains three layers accounting for different types and levels of failures, including sensor, actuator, and fuselage damage. In the case of nominal operation with possible minor failure(s), a standard adaptive controller achieves the control allocation. This is referred to as the first layer, the controller layer. The performance adjustment is accounted for in the second layer, the reference layer, whose role is to adjust the reference model in the controller design with a degraded transient performance. The uppermost mission adjustment is handled in the third layer, the mission layer, when the original mission is not feasible with greatly restricted control capabilities. The modified mission is achieved through optimization of the command signal, which guarantees the boundedness of the closed-loop signals. The main distinguishing feature of this layer is the mission decision property based on the currently available resources. The contribution of the research is the multi-layer fault-tolerant architecture that can address complete failure scenarios and their accommodation in reality. Moreover, the emphasis is on the mission design capabilities which may guarantee the stability of the aircraft with restricted post

  16. Multi-Objective Optimal Design of Switch Reluctance Motors Using Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehran; Rashidi, Farzan

    In this paper a design methodology based on a multi-objective genetic algorithm (MOGA) is presented to design switched reluctance motors with multiple conflicting objectives such as efficiency, power factor, full-load torque, full-load current, specified dimensions, weight of copper and iron, and manufacturing cost. The optimally designed motor is compared with an industrial motor having the same ratings. Results verify that the proposed method gives better performance for multi-objective optimization problems. The results of the optimal design show a reduction in the specified dimensions, weight, and manufacturing cost, and an improvement in the power factor, full-load torque, and efficiency of the motor. A major advantage of the method is its quite short response time in obtaining the optimal design.

  17. Power and Performance Trade-offs for Space Time Adaptive Processing

    SciTech Connect

    Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino; Tallent, Nathan R.; Kerbyson, Darren J.; Hoisie, Adolfy

    2015-07-27

    Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, Intel Haswell Core I7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP’s computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large size data sets without increase in power requirement. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to an improved performance in a typical STAP application.

  18. Performance Characterization of KAPAO, a Low-Cost Natural Guide Star Adaptive Optics Instrument

    NASA Astrophysics Data System (ADS)

    Long, Joseph; Choi, P. I.; Severson, S. A.; Littleton, E.; Badham, K.; Bolger, D.; Guerrero, C.; Ortega, F.; Wong, J.; Baranec, C.; Riddle, R. L.

    2014-01-01

    We present a software overview of KAPAO, an adaptive optics system designed for the Pomona College 1-meter telescope at Table Mountain Observatory. The instrument is currently in the commissioning phase and data presented here are from both in-lab and on-sky observations. In an effort to maximize on-sky performance, we have developed a suite of instrument-specific data analysis tools. This suite of tools aids in the alignment of the instrument's optics, and the optimization of on-sky performance. The analysis suite visualizes and extends the telemetry output by the Robo-AO control software. This includes visualization of deformable mirror and wavefront sensor telemetry and a Zernike decomposition of the residual wavefront error. We complement this with analysis tools for the science camera data. We model a synthetic PSF for the Table Mountain telescope to calibrate our Strehl measurements, and process image data cubes to track instrument performance over the course of an observation. By coupling WFS telemetry with science camera data we can use image sharpening techniques to account for non-common-path wavefront errors and improve image performance. Python packages for scientific computing, such as NumPy and Matplotlib, are employed to complement existing IDL code. A primary goal of this suite of software is to support the remote use of the system by a broad range of users that includes faculty and undergraduate students from the consortium of member campuses.
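
    A compact sketch of the Strehl-ratio estimate referred to above, assuming background-subtracted PSF images; the synthetic diffraction-limited PSF itself would come from a separate telescope model and is not produced here.

      import numpy as np

      def strehl_ratio(measured_psf, ideal_psf):
          # Normalize both PSFs to unit total flux, then compare their peak values;
          # the ratio of normalized peaks is the usual Strehl estimate.
          measured = measured_psf / measured_psf.sum()
          ideal = ideal_psf / ideal_psf.sum()
          return float(measured.max() / ideal.max())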

  19. Performance of Adaptive Trellis Coded Modulation Applied to MC-CDMA with Bi-orthogonal Keying

    NASA Astrophysics Data System (ADS)

    Tanaka, Hirokazu; Yamasaki, Shoichiro; Haseyama, Miki

    A Generalized Symbol-rate-increased (GSRI) Pragmatic Adaptive Trellis Coded Modulation (ATCM) scheme applied to a Multi-carrier CDMA (MC-CDMA) system with bi-orthogonal keying is analyzed. In the MC-CDMA system considered in this paper, the input sequence of the bi-orthogonal modulator consists of a code-selection bit sequence and a sign bit sequence. In [9], an efficient error correction code using a Reed-Solomon (RS) code for the code-selection bit sequence has been proposed. However, since BPSK is employed for the sign bit modulation, no error correction code is applied to it. In order to realize a high-speed wireless system, a multi-level modulation scheme (e.g. MPSK, MQAM, etc.) is desired. In this paper, we investigate the performance of MC-CDMA with bi-orthogonal keying employing GSRI ATCM. GSRI TC-MPSK can set the bandwidth expansion ratio arbitrarily while keeping a higher coding gain than the conventional pragmatic TCM scheme. By changing the modulation scheme and the bandwidth expansion ratio (coding rate), the scheme can optimize the performance according to the channel conditions. Performance evaluations by simulation on an AWGN channel and multi-path fading channels are presented. It is shown that the proposed scheme achieves markedly better throughput performance than the conventional scheme.

  20. Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning

    SciTech Connect

    Pin, Francois G.

    2002-06-01

    Robotic tasks are typically defined in Task Space (e.g., the 3-D World), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically "batches of one". Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraint, etc. occurs. The objective of our project is to develop a "generic code" to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of controls, kinematics configuration (e.g., new tools, added module). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is useable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt to real time changes in number and
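
    The ORNL generic code itself is not shown in this record; as a minimal, hedged sketch of one standard way to solve such an under-specified inverse-kinematics problem at loop rate, the example below applies damped-least-squares resolved-rate control to a toy redundant planar arm. The arm geometry, gains, and damping are illustrative assumptions.

    ```python
    import numpy as np

    def fk_planar(q, lengths):
        """End-effector (x, y) of a planar serial arm with joint angles q."""
        angles = np.cumsum(q)
        return np.array([np.sum(lengths * np.cos(angles)),
                         np.sum(lengths * np.sin(angles))])

    def jacobian_planar(q, lengths):
        """2 x n analytic Jacobian of the planar arm."""
        angles = np.cumsum(q)
        J = np.zeros((2, len(q)))
        for i in range(len(q)):
            J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
            J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
        return J

    def resolved_rate_step(q, target, lengths, damping=0.05, gain=0.5):
        """One damped-least-squares step: the task is under-specified (2 equations,
        n unknowns), so the minimum-norm joint update is chosen among all solutions."""
        err = target - fk_planar(q, lengths)
        J = jacobian_planar(q, lengths)
        JJt = J @ J.T + damping ** 2 * np.eye(2)
        return q + gain * (J.T @ np.linalg.solve(JJt, err))

    if __name__ == "__main__":
        lengths = np.array([1.0, 0.8, 0.6])      # toy 3-link arm, redundant for a 2-D task
        q = np.array([0.3, -0.2, 0.4])
        target = np.array([1.2, 1.0])
        for _ in range(200):
            q = resolved_rate_step(q, target, lengths)
        print("reached:", fk_planar(q, lengths), "target:", target)
    ```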

  1. Preliminary flight evaluation of an engine performance optimization algorithm

    NASA Technical Reports Server (NTRS)

    Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.

    1991-01-01

    A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW1128-engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust-specific fuel consumption reduction of 1 percent, decreases in FTIT of up to 100 R, and increases of as much as 12 percent in maximum thrust. PSC technology promises to be of value in next-generation tactical and transport aircraft.

  2. An optimized DSP implementation of adaptive filtering and ICA for motion artifact reduction in ambulatory ECG monitoring.

    PubMed

    Berset, Torfinn; Geng, Di; Romero, Iñaki

    2012-01-01

    Noise from motion artifacts is currently one of the main challenges in the field of ambulatory ECG recording. To address this problem, we propose the use of two different approaches. First, an adaptive filter with the electrode-skin impedance as a reference signal is described. Second, a multi-channel ECG algorithm based on Independent Component Analysis is introduced. Both algorithms have been designed and further optimized for real-time execution embedded in a dedicated Digital Signal Processor. We show that both algorithms improve the performance of a beat detection algorithm when applied in high-noise conditions. In addition, an efficient way of choosing between these methods is suggested, with the aim of reducing the overall system power consumption. PMID:23367417
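
    The embedded DSP implementation is not available in this record; the sketch below illustrates the first approach only, an LMS adaptive noise canceller that uses a motion-related reference (standing in for the electrode-skin impedance signal) to remove artifact from a toy ECG-like signal. Filter length, step size, and the simulated signals are assumptions for illustration.

    ```python
    import numpy as np

    def lms_cancel(primary, reference, taps=16, mu=0.01):
        """LMS adaptive noise cancellation: 'primary' is the corrupted ECG, 'reference'
        is a signal correlated with the artifact (e.g. electrode-skin impedance).
        Returns the cleaned signal, i.e. the cancellation error."""
        w = np.zeros(taps)
        cleaned = np.zeros_like(primary)
        for n in range(taps - 1, len(primary)):
            x = reference[n - taps + 1:n + 1][::-1]   # most recent reference samples
            y = w @ x                                 # current estimate of the artifact
            e = primary[n] - y                        # error = cleaned ECG sample
            w += 2 * mu * e * x                       # LMS weight update
            cleaned[n] = e
        return cleaned

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.arange(4000) / 250.0                               # 250 Hz sampling (assumed)
        ecg = np.sin(2 * np.pi * 1.2 * t)                         # toy stand-in for the ECG
        impedance = rng.standard_normal(t.size)                   # motion-related reference
        artifact = np.convolve(impedance, [0.5, 0.3, 0.2])[:t.size]   # causal toy mixing
        noisy = ecg + artifact
        cleaned = lms_cancel(noisy, impedance)
        print("artifact power before/after: %.3f / %.3f"
              % (np.var(noisy - ecg), np.var(cleaned[200:] - ecg[200:])))
    ```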

  3. Optimized calibration strategy for high order adaptive optics systems in closed-loop: the slope-oriented Hadamard actuation.

    PubMed

    Meimon, Serge; Petit, Cyril; Fusco, Thierry

    2015-10-19

    The accurate calibration of the interaction matrix affects the performance of an adaptive optics system. In the case of high-order systems, where the number of mirror modes reaches a few thousand, the calibration strategy is critical to reach the maximum interaction matrix quality in the minimum time. This is all the more true for the future European Extremely Large Telescope. Here, we propose a novel calibration scheme, the Slope-Oriented Hadamard strategy. We then build a tractable interaction matrix quality criterion, and show that our method tends to optimize it. We demonstrate that, for a given level of quality, the calibration time needed using the Slope-Oriented Hadamard method is seven times less than with a classical Hadamard scheme. These analytic and simulation results are confirmed experimentally on the SPHERE XAO system (SAXO). PMID:26480374

  4. Performance Study and Dynamic Optimization Design for Thread Pool Systems

    SciTech Connect

    Dongping Xu

    2004-12-19

    Thread pools have been widely used by many multithreaded applications. However, the determination of the pool size according to the application behavior still remains problematic. To automate this process, in this thesis we have developed a set of performance metrics for quantitatively analyzing thread pool performance. For our experiments, we built a thread pool system which provides a general framework for thread pool research. Based on this simulation environment, we studied the performance impact brought by the thread pool on different multithreaded applications. Additionally, the correlations between internal characterizations of thread pools and their throughput were also examined. We then proposed and evaluated a heuristic algorithm to dynamically determine the optimal thread pool size. The simulation results show that this approach is effective in improving overall application performance.
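
    The thesis' metrics and heuristic are not reproduced here; the following minimal sketch conveys the general idea of dynamic pool sizing: sample throughput at a candidate size and grow the pool only while the gain is worthwhile. Creating a fresh ThreadPoolExecutor per measurement and the 5% threshold are simplifications and assumptions, not the thesis' algorithm.

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor

    def measure_throughput(pool_size, task, n_tasks=200):
        """Completed tasks per second at a given pool size (one sampling interval).
        A real system would resize a live pool instead of rebuilding it each time."""
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=pool_size) as pool:
            list(pool.map(task, range(n_tasks)))
        return n_tasks / (time.perf_counter() - start)

    def tune_pool_size(task, initial=2, max_size=64):
        """Greedy hill climbing on throughput: keep doubling the pool while it helps,
        stop once an increase yields less than a 5% gain."""
        size, best = initial, measure_throughput(initial, task)
        while size < max_size:
            candidate = min(size * 2, max_size)
            rate = measure_throughput(candidate, task)
            if rate <= best * 1.05:
                return size
            size, best = candidate, rate
        return size

    if __name__ == "__main__":
        io_bound = lambda _: time.sleep(0.01)        # stand-in for an I/O-bound request
        print("chosen pool size:", tune_pool_size(io_bound))
    ```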

  5. Performance of laser guide star adaptive optics at Lick Observatory

    SciTech Connect

    Olivier, S.S.; An, J.; Avicola, K.

    1995-07-19

    A sodium-layer laser guide star adaptive optics system has been developed at Lawrence Livermore National Laboratory (LLNL) for use on the 3-meter Shane telescope at Lick Observatory. The system is based on a 127-actuator continuous-surface deformable mirror, a Hartmann wavefront sensor equipped with a fast-framing low-noise CCD camera, and a pulsed solid-state-pumped dye laser tuned to the atomic sodium resonance line at 589 nm. The adaptive optics system has been tested on the Shane telescope using natural reference stars yielding up to a factor of 12 increase in image peak intensity and a factor of 6.5 reduction in image full width at half maximum (FWHM). The results are consistent with theoretical expectations. The laser guide star system has been installed and operated on the Shane telescope yielding a beam with 22 W average power at 589 nm. Based on experimental data, this laser should generate an 8th magnitude guide star at this site, and the integrated laser guide star adaptive optics system should produce images with Strehl ratios of 0.4 at 2.2 µm in median seeing and 0.7 at 2.2 µm in good seeing.

  6. Multi-Scale Simulation and Optimization of Lithium Battery Performance

    NASA Astrophysics Data System (ADS)

    Golmon, Stephanie L.

    The performance and degradation of lithium batteries strongly depends on electrochemical, mechanical, and thermal phenomena. While a large volume of work has focused on thermal management, mechanical phenomena relevant to battery design are not fully understood. Mechanical degradation of electrode particles has been experimentally linked to capacity fade and failure of batteries; an understanding of the interplay between mechanics and electrochemistry in the battery is necessary in order to improve the overall performance of the battery. A multi-scale model to simulate the coupled electrochemical and mechanical behavior of Li batteries has been developed, which models the porous electrode and separator regions of the battery. The porous electrode includes a liquid electrolyte and solid active materials. A multi-scale finite element approach is used to analyze the electrochemical and mechanical performance. The multi-scale model includes a macro- and micro-scale with analytical volume-averaging methods to relate the scales. The macro-scale model describes Li-ion transport through the electrolyte, electric potentials, and displacements throughout the battery. The micro-scale considers the surface kinetics and electrochemical and mechanical response of a single particle of active material evaluated locally within the cathode region. Both scales are nonlinear and mutually dependent. The electrochemical and mechanical responses of the battery are highly dependent on the porosity in the electrode, the active material particle size, and the discharge rate. Balancing these parameters can improve the overall performance of the battery. A formal design optimization approach with multi-scale adjoint sensitivity analysis is developed to find optimal designs to improve the performance of the battery model. Optimal electrode designs are presented which maximize the capacity of the battery while mitigating stress levels during discharge over a range of discharge rates.

  7. Optimization of Transient Heat Exchanger Performance for Improved Energy Efficiency

    NASA Astrophysics Data System (ADS)

    Bran Anleu, Gabriela; Kavehpour, Pirouz; Lavine, Adrienne; Wirz, Richard

    2014-11-01

    Heat exchangers are used in a multitude of applications within systems for energy generation, energy conversion, or energy storage. Many of these systems (e.g., solar power plants) function under transient conditions, but the design of the heat exchangers is typically optimized assuming steady-state conditions. There is a potential for significant energy savings if the transient behavior of the heat exchanger is taken into account in its design, by optimizing its operating conditions in relation to the transient behavior of the overall system. The physics of the transient behavior of a heat exchanger needs to be understood to provide design parameters for transient heat exchangers that deliver energy savings. A numerical model was used to determine the optimized mass flow rates and thermal properties for a thermal energy storage system. The transient behavior is strongly linked to the dimensionless parameters relating the fluid properties, the mass flow rates, and the temperature of the fluids at the inlet of each stream. Smart metals, advanced heat exchanger surface geometries, and new methods of construction will be used to meet three goals: 1) energy and cost reduction, 2) size reduction, and 3) optimal performance for all modes of operation.

  8. USING AN ADAPTER TO PERFORM THE CHALFANT-STYLE CONTAINMENT VESSEL PERIODIC MAINTENANCE LEAK RATE TEST

    SciTech Connect

    Loftin, B.; Abramczyk, G.; Trapp, D.

    2011-06-03

    Recently the Packaging Technology and Pressurized Systems (PT&PS) organization at the Savannah River National Laboratory was asked to develop an adapter for performing the leak-rate test of a Chalfant-style containment vessel. The PT&PS organization collaborated with designers at the Department of Energy's Pantex Plant to develop the adapter currently in use for performing the leak-rate testing on the containment vessels. This paper will give the history of leak-rate testing of the Chalfant-style containment vessels, discuss the design concept for the adapter, give an overview of the design, and will present results of the testing done using the adapter.

  9. Applying Computer Adaptive Testing to Optimize Online Assessment of Suicidal Behavior: A Simulation Study

    PubMed Central

    de Vries, Anton LM; de Groot, Marieke H; de Keijser, Jos; Kerkhof, Ad JFM

    2014-01-01

    Background The Internet is used increasingly for both suicide research and prevention. To optimize online assessment of suicidal patients, there is a need for short, good-quality tools to assess elevated risk of future suicidal behavior. Computer adaptive testing (CAT) can be used to reduce response burden and improve accuracy, and make the available pencil-and-paper tools more appropriate for online administration. Objective The aim was to test whether an item response–based computer adaptive simulation can be used to reduce the length of the Beck Scale for Suicide Ideation (BSS). Methods The data used for our simulation was obtained from a large multicenter trial from The Netherlands: the Professionals in Training to STOP suicide (PITSTOP suicide) study. We applied a principal components analysis (PCA), confirmatory factor analysis (CFA), a graded response model (GRM), and simulated a CAT. Results The scores of 505 patients were analyzed. Psychometric analyses showed the questionnaire to be unidimensional with good internal consistency. The computer adaptive simulation showed that for the estimation of elevation of risk of future suicidal behavior 4 items (instead of the full 19) were sufficient, on average. Conclusions This study demonstrated that CAT can be applied successfully to reduce the length of the Dutch version of the BSS. We argue that the use of CAT can improve the accuracy and reduce the response burden when assessing the risk of future suicidal behavior online. Because CAT can be daunting for clinicians and applied scientists, we offer a concrete example of our computer adaptive simulation of the Dutch version of the BSS at the end of the paper. PMID:25213259
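
    The study fitted a graded response model to the 19-item BSS; as a hedged illustration of the adaptive-testing loop itself (administer the most informative remaining item, update the severity estimate, stop after a few items), the sketch below uses a simpler dichotomous 2PL model with invented item parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Invented 2PL item bank; the actual study used a graded response model on 19 BSS items.
    N_ITEMS = 19
    a = rng.uniform(1.0, 2.5, N_ITEMS)            # discrimination parameters
    b = rng.uniform(-1.0, 2.0, N_ITEMS)           # difficulty (severity) parameters

    def prob(theta, i):
        return 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))

    def item_information(theta, i):
        p = prob(theta, i)
        return a[i] ** 2 * p * (1 - p)

    def estimate_theta(responses, grid=np.linspace(-4, 4, 161)):
        """Posterior mean (EAP) on a grid with a standard normal prior."""
        log_post = -0.5 * grid ** 2
        for i, u in responses:
            p = 1.0 / (1.0 + np.exp(-a[i] * (grid - b[i])))
            log_post += np.log(p) if u else np.log(1 - p)
        post = np.exp(log_post - log_post.max())
        return float(np.sum(grid * post) / post.sum())

    def simulate_cat(true_theta, max_items=6):
        responses, administered, theta = [], set(), 0.0
        for _ in range(max_items):
            remaining = [i for i in range(N_ITEMS) if i not in administered]
            i = max(remaining, key=lambda j: item_information(theta, j))   # most informative item
            u = bool(rng.random() < prob(true_theta, i))                   # simulated response
            administered.add(i)
            responses.append((i, u))
            theta = estimate_theta(responses)
        return theta, len(responses)

    if __name__ == "__main__":
        est, n = simulate_cat(true_theta=1.5)
        print("estimated severity %.2f after %d adaptively chosen items" % (est, n))
    ```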

  10. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  11. On the performance of linear decreasing inertia weight particle swarm optimization for global optimization.

    PubMed

    Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

    2013-01-01

    The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO) algorithm. However, the LDIW-PSO algorithm is known to suffer from premature convergence when solving complex (multipeak) optimization problems, due to a lack of momentum for particles to perform exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants. Some of these variants have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits to compute the particle velocity limits in LDIW-PSO based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
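
    A minimal sketch of PSO with a linearly decreasing inertia weight and velocity limits expressed as a fraction of the search-space range is given below, in the spirit of the algorithm discussed. The specific constants (w from 0.9 to 0.4, a 15% velocity fraction, c1 = c2 = 2.0) are common defaults, not necessarily the values the authors recommend.

    ```python
    import numpy as np

    def ldiw_pso(f, bounds, n_particles=30, iters=200, w_start=0.9, w_end=0.4,
                 c1=2.0, c2=2.0, v_frac=0.15, seed=0):
        """Particle swarm optimization with a linearly decreasing inertia weight.
        The velocity limit is a fraction of the search-space range; all constants
        here are common defaults used for illustration."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        dim = lo.size
        v_max = v_frac * (hi - lo)

        x = rng.uniform(lo, hi, (n_particles, dim))
        v = rng.uniform(-v_max, v_max, (n_particles, dim))
        pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
        g = pbest[np.argmin(pbest_val)].copy()

        for t in range(iters):
            w = w_start - (w_start - w_end) * t / (iters - 1)   # linear decrease
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            v = np.clip(v, -v_max, v_max)
            x = np.clip(x + v, lo, hi)
            vals = np.array([f(p) for p in x])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, pbest_val.min()

    if __name__ == "__main__":
        sphere = lambda p: float(np.sum(p ** 2))                # classic benchmark function
        best, val = ldiw_pso(sphere, bounds=([-100] * 10, [100] * 10))
        print("best value found:", val)
    ```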

  12. Real-time motion-adaptive-optimization (MAO) in TomoTherapy

    NASA Astrophysics Data System (ADS)

    Lu, Weiguo; Chen, Mingli; Ruchala, Kenneth J.; Chen, Quan; Langen, Katja M.; Kupelian, Patrick A.; Olivera, Gustavo H.

    2009-07-01

    IMRT delivery follows a planned leaf sequence, which is optimized before treatment delivery. However, it is hard to model real-time variations, such as respiration, in the planning procedure. In this paper, we propose a negative feedback system of IMRT delivery that incorporates real-time optimization to account for intra-fraction motion. Specifically, we developed a feasible workflow of real-time motion-adaptive-optimization (MAO) for TomoTherapy delivery. TomoTherapy delivery is characterized by thousands of projections with a fast projection rate and ultra-fast binary leaf motion. The technique of MAO-guided delivery calculates (i) the motion-encoded dose that has been delivered up to any given projection during the delivery and (ii) the future dose that will be delivered based on the estimated motion probability and future fluence map. These two pieces of information are then used to optimize the leaf open time of the upcoming projection right before its delivery. It consists of several real-time procedures, including 'motion detection and prediction', 'delivered dose accumulation', 'future dose estimation' and 'projection optimization'. Real-time MAO requires that all procedures are executed in time less than the duration of a projection. We implemented and tested this technique using a TomoTherapy® research system. The MAO calculation took about 100 ms per projection. We calculated and compared MAO-guided delivery with two other types of delivery, motion-without-compensation delivery (MD) and static delivery (SD), using simulated 1D cases, real TomoTherapy plans and the motion traces from clinical lung and prostate patients. The results showed that the proposed technique effectively compensated for motion errors of all test cases. Dose distributions and DVHs of MAO-guided delivery approached those of SD, for regular and irregular respiration with a peak-to-peak amplitude of 3 cm, and for medium and large prostate motions. The results conceptually proved that

  13. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

    Integration time and reference intensity are important factors for achieving high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity of an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and variables in color spaces using false-color mapping. Then, the system increases or decreases the reference intensity following the map data for optimization with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, enabled changing the integration time without manual calibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. Also, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.

  14. Optimal adaptive two-stage designs for phase II cancer clinical trials.

    PubMed

    Englert, Stefan; Kieser, Meinhard

    2013-11-01

    In oncology, single-arm two-stage designs with binary endpoint are widely applied in phase II for the development of cytotoxic cancer therapies. Simon's optimal design with prefixed sample sizes in both stages minimizes the expected sample size under the null hypothesis and is one of the most popular designs. The search algorithms that are currently used to identify phase II designs showing prespecified characteristics are computationally intensive. For this reason, most authors impose restrictions on their search procedure. However, it remains unclear to what extent this approach influences the optimality of the resulting designs. This article describes an extension to fixed sample size phase II designs by allowing the sample size of stage two to depend on the number of responses observed in the first stage. Furthermore, we present a more efficient numerical algorithm that allows for an exhaustive search of designs. Comparisons between designs presented in the literature and the proposed optimal adaptive designs show that while the improvements are generally moderate, notable reductions in the average sample size can be achieved for specific parameter constellations when applying the new method and search strategy. PMID:23868324
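
    The paper's exhaustive search over adaptive designs is not reproduced here; as background, the sketch below evaluates the operating characteristics of a classical fixed two-stage (Simon-type) design, namely the type I error, power, and expected sample size under the null, from binomial probabilities. The design parameters in the example are illustrative.

    ```python
    from scipy.stats import binom

    def two_stage_characteristics(p, n1, r1, n, r):
        """Rejection probability and expected sample size for a Simon-type two-stage
        design: stop for futility after stage 1 if responses <= r1; otherwise enroll
        n - n1 more patients and reject H0 if total responses exceed r."""
        prob_reject = 0.0
        for x1 in range(r1 + 1, n1 + 1):                     # outcomes that continue to stage 2
            tail = 1.0 if r - x1 < 0 else 1.0 - binom.cdf(r - x1, n - n1, p)
            prob_reject += binom.pmf(x1, n1, p) * tail
        pet = binom.cdf(r1, n1, p)                           # probability of early termination
        expected_n = n1 + (1 - pet) * (n - n1)
        return prob_reject, expected_n

    if __name__ == "__main__":
        # Illustrative design for p0 = 0.10 vs p1 = 0.30; parameters are not from the paper.
        design = dict(n1=10, r1=1, n=29, r=5)
        alpha, en_null = two_stage_characteristics(0.10, **design)
        power, _ = two_stage_characteristics(0.30, **design)
        print("type I error %.3f, power %.3f, E[N | H0] %.1f" % (alpha, power, en_null))
    ```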

  15. Multiobjective adaptive surrogate modeling-based optimization for parameter estimation of large, complex geophysical models

    NASA Astrophysics Data System (ADS)

    Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu

    2016-03-01

    Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.

  16. Optimization of wind farm performance using low-order models

    NASA Astrophysics Data System (ADS)

    Dabiri, John; Brownstein, Ian

    2015-11-01

    A low order model that captures the dominant flow behaviors in a vertical-axis wind turbine (VAWT) array is used to maximize the power output of wind farms utilizing VAWTs. The leaky Rankine body model (LRB) was shown by Araya et al. (JRSE 2014) to predict the ranking of individual turbine performances in an array to within measurement uncertainty as compared to field data collected from full-scale VAWTs. Further, this model is able to predict array performance with significantly less computational expense than higher fidelity numerical simulations of the flow, making it ideal for use in optimization of wind farm performance. This presentation will explore the ability of the LRB model to rank the relative power output of different wind turbine array configurations as well as the ranking of individual array performance over a variety of wind directions, using various complex configurations tested in the field and simpler configurations tested in a wind tunnel. Results will be presented in which the model is used to determine array fitness in an evolutionary algorithm seeking to find optimal array configurations given a number of turbines, area of available land, and site wind direction profile. Comparison with field measurements will be presented.

  17. Optimizing Center Performance through Coordinated Data Staging, Scheduling and Recovery

    SciTech Connect

    Zhang, Zhe; Wang, Chao; Vazhkudai, Sudharshan S; Ma, Xiaosong; Pike, Gregory; Cobb, John W; Mueller, Frank

    2007-01-01

    Procurement and optimized utilization of Petascale supercomputers and centers is a renewed national priority. Sustained performance and availability of such large centers is a key technical challenge significantly impacting their usability. As recent research shows, storage systems can be a primary fault source leading to unavailability of even today's supercomputers. Due to data unavailability, jobs are frequently resubmitted, resulting in reduced compute center performance as well as a lack of coordination between I/O activities and job scheduling. In this work, we explore two mechanisms, namely the coordination of job scheduling and data staging/offloading and on-demand job input data reconstruction to address the availability of job input/output data and to improve center-wide performance. Fundamental to both mechanisms is the efficient management of transient data: in the way it is scheduled and recovered. Collectively, from a center standpoint, these techniques optimize resource usage and increase its data/service availability. From a user job standpoint, they reduce job turnaround time and optimize the usage of allocated time. We have implemented our approaches within commonly used supercomputer software tools such as the PBS scheduler and the Lustre parallel file system. We have gathered reconstruction data from a production supercomputer environment using multiple data sources. We conducted simulations based on the measured data recovery performance, the job traces and staged data logs from leadership-class supercomputer centers. Our results indicate that the average waiting time of jobs is reduced. This trend increases significantly for larger jobs and also as data is striped over more I/O nodes.

  18. Adaptive Evolution of Synthetic Cooperating Communities Improves Growth Performance

    PubMed Central

    Zhang, Xiaolin; Reed, Jennifer L.

    2014-01-01

    Symbiotic interactions between organisms are important for human health and biotechnological applications. Microbial mutualism is a widespread phenomenon and is important in maintaining natural microbial communities. Although cooperative interactions are prevalent in nature, little is known about the processes that allow their initial establishment, govern population dynamics and affect evolutionary processes. To investigate cooperative interactions between bacteria, we constructed, characterized, and adaptively evolved a synthetic community comprised of leucine and lysine Escherichia coli auxotrophs. The co-culture can grow in glucose minimal medium only if the two auxotrophs exchange essential metabolites — lysine and leucine (or its precursors). Our experiments showed that a viable co-culture using these two auxotrophs could be established and adaptively evolved to increase growth rates (by ∼3 fold) and optical densities. While independently evolved co-cultures achieved similar improvements in growth, they took different evolutionary trajectories leading to different community compositions. Experiments with individual isolates from these evolved co-cultures showed that changes in both the leucine and lysine auxotrophs improved growth of the co-culture. Interestingly, while evolved isolates increased growth of co-cultures, they exhibited decreased growth in mono-culture (in the presence of leucine or lysine). A genome-scale metabolic model of the co-culture was also constructed and used to investigate the effects of amino acid (leucine or lysine) release and uptake rates on growth and composition of the co-culture. When the metabolic model was constrained by the estimated leucine and lysine release rates, the model predictions agreed well with experimental growth rates and composition measurements. While this study and others have focused on cooperative interactions amongst community members, the adaptive evolution of communities with other types of

  19. Multidisciplinary Design Optimization of a Full Vehicle with High Performance Computing

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Gu, L.; Tho, C. H.; Sobieszczanski-Sobieski, Jaroslaw

    2001-01-01

    Multidisciplinary design optimization (MDO) of a full vehicle under the constraints of crashworthiness, NVH (Noise, Vibration and Harshness), durability, and other performance attributes is one of the imperative goals for the automotive industry. However, it is often infeasible due to the lack of computational resources, robust simulation capabilities, and efficient optimization methodologies. This paper intends to move closer towards that goal by using parallel computers for the intensive computation and combining different approximations for dissimilar analyses in the MDO process. The MDO process presented in this paper is an extension of the previous work reported by Sobieski et al. In addition to the roof crush, two full vehicle crash modes are added: full frontal impact and 50% frontal offset crash. Instead of using an adaptive polynomial response surface method, this paper employs a DOE/RSM method for exploring the design space and constructing highly nonlinear crash functions. Two MDO strategies are used and their results are compared. This paper demonstrates that with high performance computing, a conventionally intractable real-world full vehicle multidisciplinary optimization problem, considering all performance attributes and a large number of design variables, becomes feasible.

  20. Parallel performance optimizations on unstructured mesh-based simulations

    SciTech Connect

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid

    2015-06-01

    This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data from runs on thousands of cores of the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.

  1. LAMMPS strong scaling performance optimization on Blue Gene/Q

    SciTech Connect

    Coffman, Paul; Jiang, Wei; Romero, Nichols A.

    2014-11-12

    LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using an 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.

  2. Optimization of the mammalian respiratory system: symmorphosis versus single species adaptation.

    PubMed

    Jones, J H

    1998-05-01

    Taylor and Weibel's principle of symmorphosis hypothesized optimal design of the mammalian respiratory system, with no excess structure relative to its maximal O2 flux, VO2max. Although they found symmorphosis not to be a general principle of design, it might apply to a highly adapted aerobic athlete, e.g. the Thoroughbred racehorse. Using a mathematical model based on empirical data of the equine O2 transport system at normoxic VO2max, the fraction of the total limitation to O2 flux contributed by each of the respiratory transport steps is calculated as either the fractional change (F) in VO2max for a 1% change in each component, or as the fraction of total O2 pressure drop (R(int)) across each component at VO2max. When calculated as F, alveolar ventilation (VA) and pulmonary diffusing capacity (DLO2) are major limiting factors, circulatory convection (Q) is nearly as limiting, and peripheral tissue diffusing capacity (DTO2) is only one-third as important. When calculated as R(int), DLO2 is the major factor, VA and DTO2 contribute significantly, and Q is smallest. These patterns contrast with analogous studies in humans, in which Q is the single major limiting factor. The results suggest that strong selection for aerobic power in horses has maximized the malleable components of their respiratory systems until the least malleable structure, the lungs, has become a major limitation to O2 flux. Symmorphosis cannot determine if such a design is or is not optimized, as every system falls on a continuous distribution of relative optimization among species. However, the concept of symmorphosis is useful for establishing a framework within which a single species can be compared with a quantitatively defined hypothesis of optimal animal design, and compared with other species according to those criteria. PMID:9787782

  3. An Adaptive Intelligent Integrated Lighting Control Approach for High-Performance Office Buildings

    NASA Astrophysics Data System (ADS)

    Karizi, Nasim

    An acute and crucial societal problem is the energy consumed in existing commercial buildings. There are 1.5 million commercial buildings in the U.S., with only about 3% being built each year. Hence, existing buildings need to be properly operated and maintained for several decades. Application of integrated centralized control systems in buildings could lead to more than 50% energy savings. This research work demonstrates an innovative adaptive integrated lighting control approach which could achieve significant energy savings and increase indoor comfort in high performance office buildings. In the first phase of the study, a predictive algorithm was developed and validated through experiments in an actual test room. The objective was to regulate daylight on a specified work plane by controlling the blind slat angles. Furthermore, a sensor-based integrated adaptive lighting controller was designed in Simulink, which included an innovative sensor optimization approach based on a genetic algorithm to minimize the number of sensors and efficiently place them in the office. The controller was designed based on simple integral controllers. The objective of the developed control algorithm was to improve the illuminance conditions in the office by controlling the daylight and electrical lighting. To evaluate the performance of the system, the controller was applied to the experimental office model of Lee et al.'s 1998 study. The results of the developed control approach indicate a significant improvement in lighting conditions, and monthly electrical energy savings of 1-23% and 50-78% in the office model compared to two static strategies in which the blinds were left open or closed throughout the year, respectively.
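
    The controller was implemented in Simulink; the Python sketch below conveys the same idea in miniature: two simple integral controllers drive the blind position and the electric-light dimming level toward a work-plane illuminance setpoint. The linear room-response model, gains, and daylight profile are invented stand-ins, not the validated model from the dissertation.

    ```python
    import numpy as np

    def simulate_integral_lighting(setpoint_lux=500.0, steps=120, ki_blind=0.0005, ki_dim=0.0005):
        """Two integral controllers share one illuminance error: closing the blinds
        removes excess daylight, dimming up the luminaires covers any shortfall.
        The linear room response below is an invented stand-in for a real daylight model."""
        rng = np.random.default_rng(3)
        blind, dim = 0.0, 0.0            # blind: 0 = open, 1 = closed; dim: 0 = off, 1 = full
        history = []
        for k in range(steps):
            exterior = 20000 + 5000 * np.sin(k / 20) + rng.normal(0, 500)  # exterior daylight (lux)
            daylight = 0.04 * exterior * (1 - blind)     # work-plane contribution of daylight
            electric = 600.0 * dim                       # work-plane contribution of luminaires
            error = setpoint_lux - (daylight + electric)
            blind = float(np.clip(blind - ki_blind * error, 0.0, 1.0))   # integral action on blinds
            dim = float(np.clip(dim + ki_dim * error, 0.0, 1.0))         # integral action on dimming
            history.append(daylight + electric)
        return history

    if __name__ == "__main__":
        traj = simulate_integral_lighting()
        print("work-plane illuminance settles near %.0f lux" % np.mean(traj[-10:]))
    ```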

  4. Parallel Performance Optimization of the Direct Simulation Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas

    2009-11-01

    Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which includes, but is not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to the DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity (2) load balancing (3) locality and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.

  5. Optimizing sensor packaging costs and performances in environmental applications

    NASA Astrophysics Data System (ADS)

    Gandelli, Alessandro; Grimaccia, Francesco; Zich, Riccardo E.

    2005-02-01

    Sensor packaging has been identified as one of the most significant areas of research for enabling sensor usage in harsh environments for several application fields. Protection is one of the primary goals of sensor packaging; however, research deals not only with the optimization of robust and resistant packages, but also with electromagnetic performance. On the other hand, from the economic point of view, wireless sensor networks comprise hundreds of thousands of small sensors, namely motes, whose costs should be reduced to the lowest possible level, which also drives down the packaging cost. So far, packaging issues have not been extended to such topics because these products are not yet in the advanced production cycle. However, in order to guarantee high EMC performance and low packaging costs, it is necessary to address the packaging strategy from the very beginning. Technological improvements that impact production time and costs can be suitably organized by anticipating the above-mentioned issues in the development and design of the motes, obtaining in this way a significant reduction of the final optimization effort. The paper addresses the development and production techniques necessary to identify the real needs in this field and provides suitable strategies to enhance the industrial performance of high-volume production. Moreover, the electrical and mechanical characteristics of these devices are reviewed and better identified as a function of the environmental requirements and electromagnetic compatibility. Future developments complete the scenario and introduce the next mote generation, characterized by a cost that is lower by an order of magnitude.

  6. Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs

    NASA Astrophysics Data System (ADS)

    Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Jiang Graves, Yan; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve

    2013-12-01

    Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tune planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal interventions. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose, with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using
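
    A greatly simplified sketch of the two-loop structure described above is given below: an inner weighted quadratic fluence optimization (projected gradient descent on a toy dose model) and an outer loop that raises the weights of voxels doing worse than the reference plan. The influence matrix, the per-voxel outer update (used here in place of a full DVH-curve comparison), and all step sizes are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy dose model: dose = D @ fluence, with an invented influence matrix D.
    n_voxels, n_beamlets = 200, 40
    D = np.abs(rng.normal(0.5, 0.2, (n_voxels, n_beamlets)))
    prescribed = np.full(n_voxels, 60.0)                          # toy prescription (arbitrary units)
    reference_dose = prescribed + rng.normal(0, 1.5, n_voxels)    # stands in for the original plan

    def inner_fluence_opt(weights, iters=300):
        """Inner loop: minimize sum_i w_i * (dose_i - prescribed_i)^2 over non-negative
        fluence by projected gradient descent with a Lipschitz-based step size."""
        H = D.T @ (weights[:, None] * D)
        step = 1.0 / (2.0 * np.linalg.norm(H, 2))
        x = np.zeros(n_beamlets)
        for _ in range(iters):
            grad = 2.0 * (H @ x - D.T @ (weights * prescribed))
            x = np.maximum(x - step * grad, 0.0)
        return x

    weights = np.ones(n_voxels)
    for outer in range(10):                                       # outer loop: adjust voxel weights
        dose = D @ inner_fluence_opt(weights)
        worse = np.abs(dose - prescribed) > np.abs(reference_dose - prescribed)
        if not worse.any():
            break
        weights[worse] *= 1.5               # push harder on voxels lagging the reference plan
    print("voxels still worse than the reference plan:", int(worse.sum()), "of", n_voxels)
    ```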

  7. Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs.

    PubMed

    Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve

    2013-12-21

    Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tune planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal interventions. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose, with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using

  8. Optimal reconstruction for closed-loop ground-layer adaptive optics with elongated spots.

    PubMed

    Béchet, Clémentine; Tallon, Michel; Tallon-Bosc, Isabelle; Thiébaut, Éric; Le Louarn, Miska; Clare, Richard M

    2010-11-01

    The design of the laser-guide-star-based adaptive optics (AO) systems for the Extremely Large Telescopes requires careful study of the issue of elongated spots produced on Shack-Hartmann wavefront sensors. The importance of a correct modeling of the nonuniformity and correlations of the noise induced by this elongation has already been demonstrated for wavefront reconstruction. We report here on the first (to our knowledge) end-to-end simulations of closed-loop ground-layer AO with laser guide stars with such an improved noise model. The results are compared with the level of performance predicted by a classical noise model for the reconstruction. The performance is studied in terms of ensquared energy and confirms that, thanks to the improved noise model, central or side launching of the lasers does not affect the performance with respect to the laser guide stars' flux. These two launching schemes also perform similarly whatever the atmospheric turbulence strength. PMID:21045872

  9. An adaptive-management framework for optimal control of hiking near golden eagle nests in Denali National Park

    USGS Publications Warehouse

    Martin, Julien; Fackler, Paul L.; Nichols, James D.; Runge, Michael C.; McIntyre, Carol L.; Lubow, Bruce L.; McCluskie, Maggie C.; Schmutz, Joel A.

    2011-01-01

    Unintended effects of recreational activities in protected areas are of growing concern. We used an adaptive-management framework to develop guidelines for optimally managing hiking activities to maintain desired levels of territory occupancy and reproductive success of Golden Eagles (Aquila chrysaetos) in Denali National Park (Alaska, U.S.A.). The management decision was to restrict human access (hikers) to particular nesting territories to reduce disturbance. The management objective was to minimize restrictions on hikers while maintaining reproductive performance of eagles above some specified level. We based our decision analysis on predictive models of site occupancy of eagles developed using a combination of expert opinion and data collected from 93 eagle territories over 20 years. The best predictive model showed that restricting human access to eagle territories had little effect on occupancy dynamics. However, when considering important sources of uncertainty in the models, including environmental stochasticity, imperfect detection of hares on which eagles prey, and model uncertainty, restricting access of territories to hikers improved eagle reproduction substantially. An adaptive management framework such as ours may help reduce uncertainty of the effects of hiking activities on Golden Eagles

  10. Adaptive Optics Images of the Galactic Center: Using Empirical Noise-maps to Optimize Image Analysis

    NASA Astrophysics Data System (ADS)

    Albers, Saundra; Witzel, Gunther; Meyer, Leo; Sitarski, Breann; Boehle, Anna; Ghez, Andrea M.

    2015-01-01

    Adaptive Optics images are one of the most important tools in studying our Galactic Center. In-depth knowledge of the noise characteristics is crucial to optimally analyze this data. Empirical noise estimates - often represented by a constant value for the entire image - can be greatly improved by computing the local detector properties and photon noise contributions pixel by pixel. To comprehensively determine the noise, we create a noise model for each image using the three main contributors—photon noise of stellar sources, sky noise, and dark noise. We propagate the uncertainties through all reduction steps and analyze the resulting map using Starfinder. The estimation of local noise properties helps to eliminate fake detections while improving the detection limit of fainter sources. We predict that a rigorous understanding of noise allows a more robust investigation of the stellar dynamics in the center of our Galaxy.
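
    As a hedged illustration of the per-pixel noise budget described above, the sketch below combines source photon noise, sky photon noise, dark current, and read noise in quadrature to form a noise map in ADU. The gain and detector noise figures are invented, not values for any particular camera or for the Galactic Center pipeline.

    ```python
    import numpy as np

    def noise_map(image_adu, sky_adu, gain_e_per_adu=4.0, dark_e_per_s=0.1,
                  read_noise_e=10.0, exptime_s=30.0):
        """Per-pixel 1-sigma uncertainty (in ADU) for a sky-subtracted image: source
        photon noise, sky photon noise, dark current, and read noise added in quadrature.
        The detector numbers are illustrative, not values for any particular detector."""
        source_e = np.clip(image_adu, 0, None) * gain_e_per_adu
        sky_e = np.clip(sky_adu, 0, None) * gain_e_per_adu
        dark_e = dark_e_per_s * exptime_s
        variance_e = source_e + sky_e + dark_e + read_noise_e ** 2
        return np.sqrt(variance_e) / gain_e_per_adu

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        img = rng.gamma(2.0, 50.0, (64, 64))          # toy sky-subtracted frame (ADU)
        sky = np.full_like(img, 120.0)                # toy sky level that was subtracted (ADU)
        sigma = noise_map(img, sky)
        print("median per-pixel sigma: %.1f ADU" % np.median(sigma))
    ```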

  11. An 'optimal' spawning algorithm for adaptive basis set expansion in nonadiabatic dynamics

    SciTech Connect

    Yang, Sandy; Coe, Joshua D.; Kaduk, Benjamin; Martinez, Todd J.

    2009-04-07

    The full multiple spawning (FMS) method has been developed to simulate quantum dynamics in the multistate electronic problem. In FMS, the nuclear wave function is represented in a basis of coupled, frozen Gaussians, and a 'spawning' procedure prescribes a means of adaptively increasing the size of this basis in order to capture population transfer between electronic states. Herein we detail a new algorithm for specifying the initial conditions of newly spawned basis functions that minimizes the number of spawned basis functions needed for convergence. 'Optimally' spawned basis functions are placed to maximize the coupling between parent and child trajectories at the point of spawning. The method is tested with a two-state, one-mode avoided crossing model and a two-state, two-mode conical intersection model.

  12. Real-time optimal adaptation for planetary geometry and texture: 4-8 tile hierarchies.

    PubMed

    Hwa, Lok M; Duchaineau, Mark A; Joy, Kenneth I

    2005-01-01

    The real-time display of huge geometry and imagery databases involves view-dependent approximations, typically through the use of precomputed hierarchies that are selectively refined at runtime. A classic motivating problem is terrain visualization in which planetary databases involving billions of elevation and color values are displayed on PC graphics hardware at high frame rates. This paper introduces a new diamond data structure for the basic selective-refinement processing, which is a streamlined method of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display. Regular-grid tiles are proposed as the payload data per diamond for both geometry and texture. The use of 4-8 grid refinement and coarsening schemes allows level-of-detail transitions that are twice as gradual as traditional quadtree-based hierarchies, as well as very high-quality low-pass filtering compared to subsampling-based hierarchies. An out-of-core storage organization is introduced based on Sierpinski indices per diamond, along with a tile preprocessing framework based on fine-to-coarse, same-level, and coarse-to-fine gathering operations. To attain optimal frame-to-frame coherence and processing-order priorities, dual split and merge queues are developed similar to the Realtime Optimally Adapting Meshes (ROAM) Algorithm, as well as an adaptation of the ROAM frustum culling technique. Example applications of lake-detection and procedural terrain generation demonstrate the flexibility of the tile processing framework. PMID:16138547

  13. Functional relationship between cognitive representations of movement directions and visuomotor adaptation performance.

    PubMed

    Lex, Heiko; Weigelt, Matthias; Knoblauch, Andreas; Schack, Thomas

    2012-12-01

    The aim of our study was to explore whether or not different types of learners in a sensorimotor task possess characteristically different cognitive representations. Participants' sensorimotor adaptation performance was measured with a pointing paradigm which used a distortion of the visual feedback in terms of a left-right reversal. The structure of cognitive representations was assessed using a newly established experimental method, the Cognitive Measurement of Represented Directions. A post hoc analysis revealed inter-individual differences in participants' adaptation performance, and three different skill levels (skilled, average, and poor adapters) have been defined. These differences in performance were correlated with the structure of participants' cognitive representations of movement directions. Analysis of these cognitive representations revealed performance advantages for participants possessing a global cognitive representation of movement directions (aligned to cardinal movement axes), rather than a local representation (aligned to each neighboring direction). Our findings are evidence that cognitive representation structures play a functional role in adaptation performance. PMID:23007723

  14. A multilevel examination of the relationships among training outcomes, mediating regulatory processes, and adaptive performance.

    PubMed

    Chen, Gilad; Thomas, Brian; Wallace, J Craig

    2005-09-01

    This study examined whether cognitive, affective-motivational, and behavioral training outcomes relate to posttraining regulatory processes and adaptive performance similarly at the individual and team levels of analysis. Longitudinal data were collected from 156 individuals composing 78 teams who were trained on and then performed a simulated flight task. Results showed that posttraining regulation processes related similarly to adaptive performance across levels. Also, regulation processes fully mediated the influences of self- and collective efficacy beliefs on individual and team adaptive performance. Finally, knowledge and skill more strongly and directly related to adaptive performance at the individual than the team level of analysis. Implications to theory and practice, limitations, and future directions are discussed. PMID:16162057

  15. A Study on the Self-Adaption Incentive Performance Salary

    NASA Astrophysics Data System (ADS)

    Zhang, Chuanming; Wang, Yang

    In project management, performance-based salary schemes are often used to motivate project managers and similar staff to improve performance or reduce costs. However, engineering activities involve many internal and external uncertainties that the principal cannot fully observe, so it is difficult to set suitable incentive targets for project managers. This paper assumes that the managers themselves hold the most complete information about the engineering activities. Accordingly, it sets up an incentive model in which project managers report their own performance objectives, and the owner rewards or penalizes them based on both the reported and the actual performance. The model ensures that a project manager maximizes profit only by reporting results accurately, while still being motivated to improve performance or reduce costs. The paper focuses on constructing the model, analyzing its parameters, and illustrating them with an example.

  16. Examining the Relationship between Learning Organization Characteristics and Change Adaptation, Innovation, and Organizational Performance

    ERIC Educational Resources Information Center

    Kontoghiorghes, Constantine; Awbre, Susan M.; Feurig, Pamela L.

    2005-01-01

    The main purpose of this exploratory study was to examine the relationship between certain learning organization characteristics and change adaptation, innovation, and bottom-line organizational performance. The following learning organization characteristics were found to be the strongest predictors of rapid change adaptation, quick product or…

  17. GPU-based ultra-fast direct aperture optimization for online adaptive radiation therapy

    NASA Astrophysics Data System (ADS)

    Men, Chunhua; Jia, Xun; Jiang, Steve B.

    2010-08-01

    Online adaptive radiation therapy (ART) has great promise to significantly reduce normal tissue toxicity and/or improve tumor control through real-time treatment adaptations based on the current patient anatomy. However, the major technical obstacle for clinical realization of online ART, namely the inability to achieve real-time efficiency in treatment re-planning, has yet to be solved. To overcome this challenge, this paper presents our work on the implementation of an intensity-modulated radiation therapy (IMRT) direct aperture optimization (DAO) algorithm on the graphics processing unit (GPU) based on our previous work on the CPU. We formulate the DAO problem as a large-scale convex programming problem, and use an exact method called the column generation approach to deal with its extremely large dimensionality on the GPU. Five 9-field prostate and five 5-field head-and-neck IMRT clinical cases with 5 × 5 mm2 beamlet size and 2.5 × 2.5 × 2.5 mm3 voxel size were tested to evaluate our algorithm on the GPU. It takes only 0.7-3.8 s for our implementation to generate high-quality treatment plans on an NVIDIA Tesla C1060 GPU card. Our work has therefore solved a major problem in developing ultra-fast (re-)planning technologies for online ART.
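    The abstract does not reproduce the algorithm itself, so the following is a hedged, CPU-side Python sketch of the general column-generation structure for direct aperture optimization: a pricing step proposes a new deliverable aperture from the gradient with respect to beamlet intensities, and a restricted master problem re-optimizes the aperture weights. The quadratic objective ||Dx - d_target||^2, the one-open-segment-per-leaf-row deliverability rule, and all variable names are illustrative assumptions, not the authors' GPU implementation.

```python
import numpy as np

def price_new_aperture(gradient, n_rows, n_cols):
    """Pricing step: for each MLC leaf row, open the contiguous run of beamlets
    with the most negative summed gradient (a simplified deliverability rule)."""
    aperture = np.zeros(n_rows * n_cols, dtype=bool)
    for r in range(n_rows):
        g = gradient[r * n_cols:(r + 1) * n_cols]
        best, best_lo, best_hi = 0.0, 0, 0     # most negative segment found so far
        run, lo = 0.0, 0
        for j, gj in enumerate(g):
            run += gj
            if run > 0.0:                       # a positive prefix never helps: restart
                run, lo = 0.0, j + 1
            if run < best:
                best, best_lo, best_hi = run, lo, j + 1
        if best < 0.0:
            aperture[r * n_cols + best_lo:r * n_cols + best_hi] = True
    return aperture

def direct_aperture_optimization(D, d_target, n_rows, n_cols, n_outer=20):
    """Toy column-generation loop: D maps beamlet intensities to voxel dose, the
    objective is ||D x - d_target||^2, and x is a non-negative combination of
    deliverable apertures."""
    apertures, w = [], np.zeros(0)
    x = np.zeros(D.shape[1])
    for _ in range(n_outer):
        grad = 2.0 * D.T @ (D @ x - d_target)   # gradient w.r.t. beamlet intensities
        ap = price_new_aperture(grad, n_rows, n_cols)
        if not ap.any():
            break                                # no aperture can reduce the objective
        apertures.append(ap.astype(float))
        A = np.stack(apertures, axis=1)          # beamlets x apertures
        w = np.append(w, 0.0)
        M = D @ A
        L = np.linalg.norm(M, 2) ** 2 + 1e-12    # Lipschitz constant of the weight gradient
        for _ in range(200):                     # restricted master: projected gradient steps
            gw = 2.0 * M.T @ (M @ w - d_target)
            w = np.maximum(w - gw / (2.0 * L), 0.0)
        x = A @ w
    return apertures, w, x
```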

  18. A novel adaptive compression method for hyperspectral images by using EDT and particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Ghamisi, Pedram; Kumar, Lalit

    2012-01-01

    Hyperspectral sensors generate useful information about climate and the earth surface in numerous contiguous narrow spectral bands, and are widely used in resource management, agriculture, environmental monitoring, etc. Compression of hyperspectral data helps in long-term storage and transmission systems. Lossless compression is preferred for high-detail data such as hyperspectral data. Because of the high redundancy in neighboring spectral bands and the aim of achieving a higher compression ratio, adaptive coding methods are well suited to hyperspectral data. This paper introduces two new compression methods. One of them is a powerful adaptive method for hyperspectral data compression, based on separating the bands with different characteristics using the histogram and Binary Particle Swarm Optimization (BPSO) and compressing each group in a different manner. The proposed methods improve the compression ratio over the JPEG standards and save storage space and transmission bandwidth. The proposed methods are applied to different test cases, and the results are evaluated and compared with other compression methods, such as lossless JPEG and JPEG2000.

  19. Ambient illumination revisited: A new adaptation-based approach for optimizing medical imaging reading environments

    SciTech Connect

    Chawla, Amarpreet S.; Samei, Ehsan

    2007-01-15

    Ambient lighting in soft-copy reading rooms is currently kept at low values to preserve contrast rendition in the dark regions of a medical image. Low illuminance levels, however, create inadequate viewing conditions and may also cause eye strain. This eye strain may be potentially attributed to notable variations in the luminance adaptation state of the reader's eyes when moving the gaze intermittently between the brighter display and darker surrounding surfaces. This paper presents a methodology to minimize this variation and optimize the lighting conditions of reading rooms by exploiting the properties of liquid crystal displays (LCDs) with low diffuse reflection coefficients and high luminance ratio. First, a computational model was developed to determine a global luminance adaptation value, L_adp, when viewing a medical image on display. The model is based on the pupil diameter, which depends on the luminance of the observed object. Second, this value was compared with the luminance reflected off surrounding surfaces, L_s, under various conditions of room illuminance, E, different values of diffuse reflection coefficients of surrounding surfaces, R_s, and calibration settings of a typical LCD. The results suggest that for typical luminance settings of current LCDs, it is possible to raise ambient illumination to minimize differences in eye adaptation, potentially reducing visual fatigue while also complying with the TG18 specifications for controlled contrast rendition. Specifically, room illumination in the 75-150 lux range and surface diffuse reflection coefficients in the practical range of 0.13-0.22 sr^-1 provide an ideal setup for typical LCDs. Future LCDs with lower diffuse reflectivity and with higher inherent luminance ratios can provide further improvement of ergonomic viewing conditions in reading rooms.

  20. Characterization, performance and optimization of PVDF as a piezoelectric film for advanced space mirror concepts.

    SciTech Connect

    Jones, Gary D.; Assink, Roger Alan; Dargaville, Tim Richard; Chaplya, Pavel Mikhail; Clough, Roger Lee; Elliott, Julie M.; Martin, Jeffrey W.; Mowery, Daniel Michael; Celina, Mathew Christopher

    2005-11-01

    Piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest for large aperture space-based telescopes as adaptive or smart materials. Dimensional adjustments of adaptive polymer films depend on controlled charge deposition. Predicting their long-term performance requires a detailed understanding of the piezoelectric material features, expected to suffer due to space environmental degradation. Hence, the degradation and performance of PVDF and its copolymers under various stress environments expected in low Earth orbit has been reviewed and investigated. Various experiments were conducted to expose these polymers to elevated temperature, vacuum UV, γ-radiation and atomic oxygen. The resulting degradative processes were evaluated. The overall materials performance is governed by a combination of chemical and physical degradation processes. Molecular changes are primarily induced via radiative damage, and physical damage from temperature and atomic oxygen exposure is evident as depoling, loss of orientation and surface erosion. The effects of combined vacuum UV radiation and atomic oxygen resulted in expected surface erosion and pitting rates that determine the lifetime of thin films. Interestingly, the piezo responsiveness in the underlying bulk material remained largely unchanged. This study has delivered a comprehensive framework for material properties and degradation sensitivities with variations in individual polymer performances clearly apparent. The results provide guidance for material selection, qualification, optimization strategies, feedback for manufacturing and processing, or alternative materials. Further material qualification should be conducted via experiments under actual space conditions.

  1. Adapting sensory data for multiple robots performing spill cleanup

    SciTech Connect

    Storjohann, K.; Saltzen, E.

    1990-09-01

    This paper describes a possible method of converting a single performing-robot algorithm into a multiple performing-robot algorithm without the need to modify previously written code. The algorithm to be converted involves spill detection and clean-up by the HERMIES-III mobile robot. In order to achieve the goal of multiple performing robots with this algorithm, two steps are taken. First, the task is formally divided into two sub-tasks, spill detection and spill clean-up, the former of which is allocated to the added performing robot, HERMIES-IIB. Second, an inverse perspective mapping is applied to the data acquired by the new performing robot (HERMIES-IIB), allowing the data to be processed by the previously written algorithm without rewriting the code. 6 refs., 4 figs.

  2. Adaptive Particle Swarm Optimizer with Varying Acceleration Coefficients for Finding the Most Stable Conformer of Small Molecules.

    PubMed

    Agrawal, Shikha; Silakari, Sanjay; Agrawal, Jitendra

    2015-11-01

    A novel parameter automation strategy for Particle Swarm Optimization, called APSO (Adaptive PSO), is proposed. The algorithm is designed to efficiently control the local search and convergence to the global optimum solution. Parameter c1 controls the impact of the cognitive component on the particle trajectory and c2 controls the impact of the social component. Instead of fixing the values of c1 and c2, this paper updates these acceleration coefficients by considering the time variation of the evaluation function along with a varying inertia weight factor in PSO. Here the maximum and minimum values of the evaluation function are used to gradually decrease and increase the values of c1 and c2, respectively. Molecular energy minimization is one of the most challenging unsolved problems and can be formulated as a global optimization problem. The aim of the present paper is to investigate the effect of the newly developed APSO on a highly complex molecular potential energy function and to check the efficiency of the proposed algorithm in finding the global minimum of the function under consideration. The proposed APSO algorithm is therefore applied in two cases: first, for the minimization of the potential energy of small molecules with up to 100 degrees of freedom, and second, for finding the global minimum energy conformation of the 1,2,3-trichloro-1-fluoro-propane molecule based on a realistic potential energy function. The computational results of all the cases show that the proposed method performs significantly better than the other algorithms. PMID:27491033
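    The exact c1/c2 update rule is not given in the abstract, so the sketch below is only a plausible Python illustration of a PSO with time-varying acceleration coefficients and a decreasing inertia weight; the spread-based schedule, the bounds handling, and all parameter values are assumptions.

```python
import numpy as np

def apso_minimize(f, bounds, n_particles=30, n_iters=200, seed=0):
    """Sketch of a PSO with adaptive acceleration coefficients c1, c2 and a
    linearly decreasing inertia weight. The c1/c2 schedule below (driven by the
    spread between the best and worst evaluations in the current swarm) is an
    illustrative stand-in for the rule described in the abstract."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()

    w_max, w_min = 0.9, 0.4
    c_max, c_min = 2.5, 0.5
    for t in range(n_iters):
        w = w_max - (w_max - w_min) * t / n_iters        # inertia weight schedule
        f_best, f_worst = pbest_f.min(), pbest_f.max()
        spread = (f_worst - f_best) / (abs(f_worst) + 1e-12)
        # early on (large spread): favour cognitive search; later: favour social pull
        c1 = c_min + (c_max - c_min) * spread
        c2 = c_max - (c_max - c_min) * spread
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

    For example, apso_minimize(lambda p: float(np.sum(p**2)), [(-5, 5)] * 10) returns an approximate minimizer of a 10-dimensional sphere function.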

  3. Application of adaptive neuro-fuzzy inference system and cuckoo optimization algorithm for analyzing electro chemical machining process

    NASA Astrophysics Data System (ADS)

    Teimouri, Reza; Sohrabpoor, Hamed

    2013-12-01

    The electrochemical machining (ECM) process is gaining importance due to specific advantages that can be exploited during machining. The process offers several special privileges such as a higher machining rate, better accuracy and control, and a wider range of materials that can be machined. The contribution of many dominant parameters in the process makes prediction and selection of optimal values complex, especially when the process is applied to the machining of hard materials. In the present work, in order to investigate the effects of electrolyte concentration, electrolyte flow rate, applied voltage and feed rate on material removal rate (MRR) and surface roughness (SR), an adaptive neuro-fuzzy inference system (ANFIS) has been used to create predictive models based on experimental observations. ANFIS 3D surfaces have then been plotted to analyze the effects of the process parameters on MRR and SR. Finally, the cuckoo optimization algorithm (COA) was used to select solutions in which the process reaches maximum material removal rate and minimum surface roughness simultaneously. Results indicated that the ANFIS technique is superior for modeling MRR and SR with high prediction accuracy. Also, the results obtained with COA have been compared with those derived from confirmatory experiments, which validates the applicability and suitability of the proposed techniques in enhancing the performance of the ECM process.

  4. An optimal performance control scheme for a 3D crane

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mohammad Javad; Mohamed, Z.; Husain, A. R.; Tokhi, M. O.

    2016-01-01

    This paper presents an optimal performance control scheme for a three-dimensional (3D) crane system including a Zero Vibration shaper, which considers two control objectives concurrently. The control objectives are fast and accurate positioning of the trolley and minimum sway of the payload. A complete mathematical model of a lab-scaled 3D crane is simulated in Simulink. With a specific cost function, the proposed controller is designed to cater to both control objectives, similar to a skilled operator. Simulation and experimental studies on a 3D crane show that the proposed controller performs better than a sequentially tuned PID-PID anti-swing controller. The controller provides a better position response with satisfactory payload sway in both rail and trolley motions. Experiments with different payloads and cable lengths show that the proposed controller is robust to changes in payload, with satisfactory responses.
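    As an aside, the Zero Vibration shaper mentioned above has a standard two-impulse closed form; the Python sketch below shows it applied to a pendulum-like sway mode. The cable length, damping ratio, and sampling time are placeholder assumptions, not values from the paper.

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse Zero Vibration shaper for a mode with natural frequency wn
    (rad/s) and damping ratio zeta, discretized at sample time dt."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    A = np.array([1.0, K]) / (1.0 + K)                 # impulse amplitudes (sum to 1)
    Td = 2.0 * np.pi / (wn * np.sqrt(1.0 - zeta**2))   # damped period of the sway mode
    times = np.array([0.0, 0.5 * Td])                  # impulse times
    shaper = np.zeros(int(round(times[-1] / dt)) + 1)
    for a, ti in zip(A, times):
        shaper[int(round(ti / dt))] += a
    return shaper

# Example: shape a trolley velocity step so the payload sway mode is not excited.
g, L = 9.81, 1.0                       # assumed 1 m cable length
wn = np.sqrt(g / L)                    # pendulum natural frequency
dt = 0.01
command = np.ones(500)                 # unshaped step command
shaped = np.convolve(command, zv_shaper(wn, zeta=0.01, dt=dt))[:command.size]
```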

  5. Performance comparison of polynomial representations for optimizing optical freeform systems

    NASA Astrophysics Data System (ADS)

    Brömel, A.; Gross, H.; Ochse, D.; Lippmann, U.; Ma, C.; Zhong, Y.; Oleszko, M.

    2015-09-01

    Optical systems can benefit strongly from freeform surfaces; however, the choice of the right representation is not an easy one. Classical representations like X-Y polynomials, as well as Zernike polynomials, are often used for such systems, but have some disadvantages regarding their orthogonality, resulting in worse convergence and reduced quality in the final results compared to newer representations like the Q-polynomials by Forbes. Additionally, the supported aperture is circular, which can be a major drawback in the case of optical systems with a rectangular aperture. In this case, other representations like Chebyshev or Legendre polynomials come into focus. There are a large number of possibilities; however, experience with these newer representations is rather limited. Therefore, this work focuses on investigating the performance of four widely used representations in optimizing two ambitious systems with very different properties: a three-mirror anastigmat and an anamorphic system. The chosen surface descriptions offer support for circular or rectangular apertures, as well as different degrees of departure from rotational symmetry. The basic shape is, for example, a conic or a best-fit sphere, and the polynomial set is non-orthogonal, spatially orthogonal, or slope-orthogonal. These surface representations were chosen to evaluate the impact of these aspects on the optimization performance for the two example systems. The freeform descriptions investigated here were XY polynomials, Zernike polynomials in Fringe representation, Q-polynomials by Forbes, and 2-dimensional Chebyshev polynomials. As a result, recommendations can be given for the right choice of freeform surface representation for practical optimization of optical systems.
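    To make the rectangular-aperture case concrete, here is a minimal Python sketch of a freeform sag built from a base conic plus a 2D Chebyshev departure (evaluated with numpy's chebval2d). The aperture sizes, curvature, and coefficients are placeholders and do not correspond to either example system in the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def conic_sag(x, y, c, k):
    """Base conic sag with curvature c and conic constant k."""
    r2 = x**2 + y**2
    return c * r2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r2))

def freeform_sag(x, y, c, k, coeffs, ax, ay):
    """Conic plus a 2D Chebyshev departure; x and y are normalized to the
    rectangular half-apertures ax, ay so the basis lives on [-1, 1]^2."""
    u, v = x / ax, y / ay
    return conic_sag(x, y, c, k) + C.chebval2d(u, v, coeffs)

# Placeholder example: a weak freeform departure on a 40 x 20 mm aperture.
coeffs = np.zeros((4, 4))
coeffs[2, 0] = 5e-4      # T2(u) term
coeffs[1, 2] = -2e-4     # T1(u) T2(v) term
x, y = np.meshgrid(np.linspace(-20, 20, 101), np.linspace(-10, 10, 51))
z = freeform_sag(x, y, c=1 / 500.0, k=-1.0, coeffs=coeffs, ax=20.0, ay=10.0)
```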

  6. Goal orientation and work role performance: predicting adaptive and proactive work role performance through self-leadership strategies.

    PubMed

    Marques-Quinteiro, Pedro; Curral, Luís Alberto

    2012-01-01

    This article explores the relationship between goal orientation, self-leadership dimensions, and adaptive and proactive work role performances. The authors hypothesize that learning orientation, in contrast to performance orientation, positively predicts proactive and adaptive work role performances and that this relationship is mediated by self-leadership behavior-focused strategies. Self-leadership natural reward strategies and thought pattern strategies are posited to moderate this relationship. Workers (N = 108) from a software company participated in this study. As expected, learning orientation did predict adaptive and proactive work role performance. Moreover, in the relationship between learning orientation and proactive work role performance through self-leadership behavior-focused strategies, a moderated mediation effect was found for self-leadership natural reward and thought pattern strategies. Finally, the results and implications are discussed and future research directions are proposed. PMID:23094471

  7. Human Performance Optimization: Culture Change and Paradigm Shift.

    PubMed

    Deuster, Patricia A; OʼConnor, Francis G

    2015-11-01

    The term "Human Performance Optimization" (HPO) emerged across the Department of Defense (DoD) around 2006 when the importance of human performance for military success on the battlefield was acknowledged. Likewise, the term Total Force Fitness (TFF) arose as a conceptual framework within DoD in response to the need for a more holistic approach to the unparalleled operational demands with multiple deployments and strains on the United States Armed Forces. Both HPO and TFF are frameworks for enhancing and sustaining the health, well-being, and performance among our warriors and their families; they are fundamental to accomplishing our nation's mission. A demands-resources model for HPO is presented within the context of TFF to assist in operationalizing actions to enhance performance. In addition, the role leaders can serve is discussed; leaders are uniquely postured in the military chain of command to directly influence a culture of fitness for a ready force, and promote the concept that service members are ultimately responsible for their fitness and performance. PMID:26506199

  8. Robust Multivariable Optimization and Performance Simulation for ASIC Design

    NASA Technical Reports Server (NTRS)

    DuMonthier, Jeffrey; Suarez, George

    2013-01-01

    Application-specific-integrated-circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in the development cycle of a system due to the time required for testing and qualification severely limiting opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and to analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation.

  9. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    NASA Astrophysics Data System (ADS)

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; De Fine Licht, J. C.; Duhem, L.; Elvira, V. D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Sehgal, R.; Shadura, O.; Wenzel, S.

    2015-05-01

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size or switching from vector to single-track mode when vectorization causes only overhead. This work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; its initial results are presented here.

  10. A Self-adaptive Evolutionary Algorithm for Multi-objective Optimization

    NASA Astrophysics Data System (ADS)

    Cao, Ruifen; Li, Guoli; Wu, Yican

    Evolutionary algorithms have gained worldwide popularity for multi-objective optimization. The paper proposes a self-adaptive evolutionary algorithm (called SEA) for multi-objective optimization. In SEA, the probabilities of crossover and mutation, Pc and Pm, are varied depending on the fitness values of the solutions. The fitness assignment of SEA realizes the twin goals of maintaining diversity in the population and guiding the population toward the true Pareto front; the fitness value of an individual depends not only on an improved density estimation but also on its non-dominated rank. The density estimation can preserve diversity in all instances, including when the scales of the objectives differ greatly from each other. SEA is compared against the Non-dominated Sorting Genetic Algorithm (NSGA-II) on a set of test problems introduced by the MOEA community. Simulation results show that SEA is as effective as NSGA-II on most test functions, and when the objective scales differ greatly from each other, SEA achieves a better distribution of non-dominated solutions.
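    The abstract does not give the exact formulas for Pc and Pm, so the following Python sketch only illustrates the general idea of fitness-dependent crossover and mutation probabilities for a minimization problem (in the spirit of Srinivas-Patnaik adaptive GAs); the rate ranges and the scaling are assumptions.

```python
def adaptive_rates(f_parents, f_child, f_avg, f_best,
                   pc_hi=0.9, pc_lo=0.5, pm_hi=0.10, pm_lo=0.01):
    """Fitness-dependent crossover/mutation probabilities (minimization):
    solutions better than the population average are disturbed less, poor
    solutions are disturbed more. The exact scaling is an illustrative assumption."""
    def scale(f, p_hi, p_lo):
        if f <= f_avg:                           # better than (or equal to) the average
            denom = max(f_avg - f_best, 1e-12)
            return p_lo + (p_hi - p_lo) * (f - f_best) / denom
        return p_hi                              # worse than average: maximum disruption
    pc = scale(min(f_parents), pc_hi, pc_lo)     # crossover rate tied to the fitter parent
    pm = scale(f_child, pm_hi, pm_lo)            # mutation rate tied to the offspring
    return pc, pm
```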

  11. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  12. The geometry of r-adaptive meshes generated using optimal transport methods

    NASA Astrophysics Data System (ADS)

    Budd, C. J.; Russell, R. D.; Walsh, E.

    2015-02-01

    The principles of mesh equidistribution and alignment play a fundamental role in the design of adaptive methods, and a metric tensor and mesh metric are useful theoretical tools for understanding a method's level of mesh alignment, or anisotropy. We consider a mesh redistribution method based on the Monge-Ampère equation which combines equidistribution of a given scalar density function with optimal transport. It does not involve explicit use of a metric tensor, although such a tensor must exist for the method, and an interesting question to ask is whether or not the alignment produced by the metric gives an anisotropic mesh. For model problems with a linear feature and with a radially symmetric feature, we derive the exact form of the metric, which involves expressions for its eigenvalues and eigenvectors. The eigenvectors are shown to be orthogonal and tangential to the feature, and the ratio of the eigenvalues (corresponding to the level of anisotropy) is shown to depend, both locally and globally, on the value of the density function and the amount of curvature. We thereby demonstrate how the optimal transport method produces an anisotropic mesh along a given feature while equidistributing a suitably chosen scalar density function. Numerical results are given to verify these results and to demonstrate how the analysis is useful for problems involving more complex features, including for a non-trivial time-dependent nonlinear PDE which evolves narrow and curved reaction fronts.
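    For readers unfamiliar with the approach, the optimal-transport mesh construction referred to above is usually written as follows (a hedged LaTeX restatement of the standard formulation, not quoted from the paper): the physical mesh is the image of the computational coordinates under the gradient of a convex potential, and equidistribution of the density yields a Monge-Ampère equation whose Hessian is the Jacobian of the mesh map.

```latex
\[
  x = \nabla_{\xi} P(\xi), \qquad
  \rho\bigl(\nabla_{\xi} P(\xi)\bigr)\,
  \det\bigl(D^{2}_{\xi} P(\xi)\bigr) = \theta ,
\]
% \theta is a normalization constant fixed by the total "mass" of \rho over the domain.
% The Jacobian of the mesh map is the Hessian, J(\xi) = D^{2}_{\xi} P(\xi); its eigenvalues
% and eigenvectors give the local stretching and alignment (anisotropy) analysed in the paper.
```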

  13. Adaptive Evolutionary Programming Incorporating Neural Network for Transient Stability Constrained Optimal Power Flow

    NASA Astrophysics Data System (ADS)

    Tangpatiphan, Kritsana; Yokoyama, Akihiko

    This paper presents an adaptive evolutionary programming (AEP) method incorporating a neural network for solving the transient stability constrained optimal power flow (TSCOPF). The proposed AEP method is an evolutionary programming (EP)-based algorithm that adjusts its population size automatically during the optimization process. An artificial neural network, which classifies each AEP individual based on its stability degree, is embedded into the search template to reduce the computational load caused by the transient stability constraints. Fuel cost minimization is selected as the objective function of TSCOPF. The proposed method is tested on the IEEE 30-bus system with two types of fuel cost functions, i.e. the conventional quadratic function and a quadratic function with a superimposed sine component, modeling the cost curves without and with the valve-point loading effect, respectively. The numerical examples show that AEP is more effective than conventional EP in terms of computational speed, and that incorporating the neural network into AEP can significantly reduce the computational time of TSCOPF. A study of the architecture of the neural network is also conducted and discussed. In addition, the effectiveness of the proposed method for solving TSCOPF with the consideration of multiple contingencies is demonstrated.
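    The abstract does not specify how the stability classifier is wired into the EP loop, so the fragment below is only a generic Python sketch of surrogate screening: a classifier trained on previously simulated operating points filters candidates before the expensive time-domain simulation is run. The sklearn-style predict_proba interface, the 0.1 threshold, and the penalty value are assumptions.

```python
import numpy as np

def screened_fitness(individual, cost_fn, stability_classifier, full_tds, penalty=1e6):
    """Surrogate screening sketch: `stability_classifier` (assumed to expose an
    sklearn-style predict_proba) predicts whether a candidate dispatch is
    transiently stable; the expensive time-domain simulation `full_tds` is only
    run on candidates the classifier does not confidently reject."""
    x = np.asarray(individual, dtype=float).reshape(1, -1)
    p_stable = stability_classifier.predict_proba(x)[0, 1]
    if p_stable < 0.1:                  # confidently unstable: penalize without simulating
        return cost_fn(individual) + penalty
    if full_tds(individual):            # borderline or likely stable: verify by simulation
        return cost_fn(individual)
    return cost_fn(individual) + penalty
```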

  14. Enabling the extended compact genetic algorithm for real-parameter optimization by using adaptive discretization.

    PubMed

    Chen, Ying-ping; Chen, Chao-Hong

    2010-01-01

    An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered as a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD and ECGA with two well-known discretization methods: the fixed-height histogram (FHH) and the fixed-width histogram (FWH) are compared; (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence. PMID:20210600
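    A minimal Python sketch of the split-on-demand idea as described above: an interval is split at a random point whenever it holds more search points than a threshold (which the caller decreases every iteration), and the resulting nonempty sub-intervals receive integer codes. The exact splitting and coding rules of the published SoD may differ; points are assumed to lie in [lo, hi).

```python
import random

def split_on_demand(points, lo, hi, threshold):
    """Recursively split [lo, hi) at a random point whenever it contains more
    than `threshold` search points; return the nonempty sub-intervals in order."""
    inside = [p for p in points if lo <= p < hi]
    if len(inside) <= threshold or hi - lo <= 1e-12:
        return [(lo, hi)] if inside else []
    cut = random.uniform(lo, hi)
    return (split_on_demand(points, lo, cut, threshold)
            + split_on_demand(points, cut, hi, threshold))

def discretize(points, lo, hi, threshold):
    """Assign each search point the integer code of the sub-interval it falls in."""
    intervals = split_on_demand(points, lo, hi, threshold)
    codes = [next((i for i, (a, b) in enumerate(intervals) if a <= p < b),
                  len(intervals) - 1)            # right endpoint falls in the last interval
             for p in points]
    return intervals, codes
```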

  15. Multiband RF pulses with improved performance via convex optimization

    NASA Astrophysics Data System (ADS)

    Shang, Hong; Larson, Peder E. Z.; Kerr, Adam; Reed, Galen; Sukumar, Subramaniam; Elkhaled, Adam; Gordon, Jeremy W.; Ohliger, Michael A.; Pauly, John M.; Lustig, Michael; Vigneron, Daniel B.

    2016-01-01

    Selective RF pulses are commonly designed with the desired profile as a low pass filter frequency response. However, for many MRI and NMR applications, the spectrum is sparse with signals existing at a few discrete resonant frequencies. By specifying a multiband profile and releasing the constraint on "don't-care" regions, the RF pulse performance can be improved to enable a shorter duration, sharper transition, or lower peak B1 amplitude. In this project, a framework for designing multiband RF pulses with improved performance was developed based on the Shinnar-Le Roux (SLR) algorithm and convex optimization. It can create several types of RF pulses with multiband magnitude profiles, arbitrary phase profiles and generalized flip angles. The advantage of this framework with a convex optimization approach is the flexible trade-off of different pulse characteristics. Designs for specialized selective RF pulses for balanced SSFP hyperpolarized (HP) 13C MRI, a dualband saturation RF pulse for 1H MR spectroscopy, and a pre-saturation pulse for HP 13C study were developed and tested.

  16. Carbon Material Optimized Biocathode for Improving Microbial Fuel Cell Performance

    PubMed Central

    Tursun, Hairti; Liu, Rui; Li, Jing; Abro, Rashid; Wang, Xiaohui; Gao, Yanmei; Li, Yuan

    2016-01-01

    To improve the performance of microbial fuel cells (MFCs), the biocathode electrode material of a double-chamber MFC was optimized. Alongside the basic carbon fiber brush, three carbon materials, namely graphite granules, activated carbon granules (ACG) and activated carbon powder, were added to the cathode chambers to improve power generation. The results show that the addition of carbon materials increased the amount of electroactive microbes available on the electrode surface and thus promoted the oxygen reduction rate, which improved the power generation performance of the MFCs. The output current (external resistance = 1000 Ω) increased greatly after the addition of the three carbon materials, and the maximum power densities in the stable-current phase increased by 47.4, 166.1, and 33.5%, respectively. Additionally, the coulombic efficiencies of the MFC increased by 16.3, 64.3, and 20.1%, respectively. These results show that the MFC optimized with ACG delivers better power generation, a higher chemical oxygen demand removal rate, and a higher coulombic efficiency. PMID:26858695

  17. Experiences performing conceptual design optimization of transport aircraft

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.

    1984-01-01

    Optimum Preliminary Design of Transports (OPDOT) is a computer program developed at NASA Langley Research Center for evaluating the impact of new technologies upon transport aircraft. For example, it provides the capability to look at configurations which have been resized to take advantage of active controls and provides an indication of economic sensitivity to its use. Although this tool returns a conceptual design configuration as its output, it does not have the accuracy, in absolute terms, to yield satisfactory point designs for immediate use by aircraft manufacturers. However, the relative accuracy of comparing OPDOT-generated configurations while varying technological assumptions has been demonstrated to be highly reliable. Hence, OPDOT is a useful tool for ascertaining the synergistic benefits of active controls, composite structures, improved engine efficiencies and other advanced technological developments. The approach used by OPDOT is a direct numerical optimization of an economic performance index. A set of independent design variables is iterated, given a set of design constants and data. The design variables include wing geometry, tail geometry, fuselage size, and engine size. This iteration continues until the optimum performance index is found which satisfies all the constraint functions. The analyst interacts with OPDOT by varying the input parameters to either the constraint functions or the design constants. Note that the optimization of aircraft geometry parameters is equivalent to finding the ideal aircraft size, but with more degrees of freedom than classical design procedures will allow.

  18. Performance analysis and optimization of power plants with gas turbines

    NASA Astrophysics Data System (ADS)

    Besharati-Givi, Maryam

    The gas turbine is one of the most important applications for power generation. The purpose of this research is the performance analysis and optimization of power plants using different design systems at different operating conditions. In this research, accurate efficiency calculation and the determination of optimum efficiency values for the design of chiller inlet cooling and a blade-cooled gas turbine are investigated. This research shows how it is possible to find the optimum design for different operating conditions, such as ambient temperature, relative humidity, turbine inlet temperature, and compressor pressure ratio. The simulated designs include a chiller with varied COP and fogging cooling for the compressor. In addition, the overall thermal efficiency is improved by adding design features such as reheat and regenerative heating. The other goal of this research focuses on the blade-cooled gas turbine for higher turbine inlet temperature and, consequently, higher efficiency. New film cooling equations, along with a varying film cooling effectiveness for the optimum cooling air requirement at the first-stage blades, and internal and trailing edge cooling for the second stage, are developed for optimal efficiency calculation. This research sets the groundwork for using the optimum value of efficiency calculation while using inlet cooling and blade cooling designs. In the final step, the designed systems in the gas cycles are combined with a steam cycle for performance improvement.

  19. Carbon Material Optimized Biocathode for Improving Microbial Fuel Cell Performance.

    PubMed

    Tursun, Hairti; Liu, Rui; Li, Jing; Abro, Rashid; Wang, Xiaohui; Gao, Yanmei; Li, Yuan

    2016-01-01

    To improve the performance of microbial fuel cells (MFCs), the biocathode electrode material of a double-chamber MFC was optimized. Alongside the basic carbon fiber brush, three carbon materials, namely graphite granules, activated carbon granules (ACG) and activated carbon powder, were added to the cathode chambers to improve power generation. The results show that the addition of carbon materials increased the amount of electroactive microbes available on the electrode surface and thus promoted the oxygen reduction rate, which improved the power generation performance of the MFCs. The output current (external resistance = 1000 Ω) increased greatly after the addition of the three carbon materials, and the maximum power densities in the stable-current phase increased by 47.4, 166.1, and 33.5%, respectively. Additionally, the coulombic efficiencies of the MFC increased by 16.3, 64.3, and 20.1%, respectively. These results show that the MFC optimized with ACG delivers better power generation, a higher chemical oxygen demand removal rate, and a higher coulombic efficiency. PMID:26858695

  20. Multiband RF pulses with improved performance via convex optimization.

    PubMed

    Shang, Hong; Larson, Peder E Z; Kerr, Adam; Reed, Galen; Sukumar, Subramaniam; Elkhaled, Adam; Gordon, Jeremy W; Ohliger, Michael A; Pauly, John M; Lustig, Michael; Vigneron, Daniel B

    2016-01-01

    Selective RF pulses are commonly designed with the desired profile as a low pass filter frequency response. However, for many MRI and NMR applications, the spectrum is sparse with signals existing at a few discrete resonant frequencies. By specifying a multiband profile and releasing the constraint on "don't-care" regions, the RF pulse performance can be improved to enable a shorter duration, sharper transition, or lower peak B1 amplitude. In this project, a framework for designing multiband RF pulses with improved performance was developed based on the Shinnar-Le Roux (SLR) algorithm and convex optimization. It can create several types of RF pulses with multiband magnitude profiles, arbitrary phase profiles and generalized flip angles. The advantage of this framework with a convex optimization approach is the flexible trade-off of different pulse characteristics. Designs for specialized selective RF pulses for balanced SSFP hyperpolarized (HP) (13)C MRI, a dualband saturation RF pulse for (1)H MR spectroscopy, and a pre-saturation pulse for HP (13)C study were developed and tested. PMID:26754063

  1. Perform - A performance optimizing computer program for dynamic systems subject to transient loadings

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Wang, B. P.; Yoo, Y.; Clark, B.

    1973-01-01

    A description and applications of a computer capability for determining the ultimate optimal behavior of a dynamically loaded structural-mechanical system are presented. This capability provides characteristics of the theoretically best, or limiting, design concept according to response criteria dictated by design requirements. Equations of motion of the system in first or second order form include incompletely specified elements whose characteristics are determined in the optimization of one or more performance indices subject to the response criteria in the form of constraints. The system is subject to deterministic transient inputs, and the computer capability is designed to operate with a large off-the-shelf linear programming software package which performs the desired optimization. The report contains user-oriented program documentation in engineering, problem-oriented form. Applications cover a wide variety of dynamics problems including those associated with such diverse configurations as a missile-silo system, impacting freight cars, and an aircraft ride control system.

  2. Adaptive Measurement of Well-Being: Maximizing Efficiency and Optimizing User Experience during Individual Assessment.

    PubMed

    Kraatz, Miriam; Sears, Lindsay E; Coberley, Carter R; Pope, James E

    2016-08-01

    Well-being is linked to important societal factors such as health care costs and productivity and has experienced a surge in development activity of both theories and measurement. This study builds on validation of the Well-Being 5 survey and for the first time applies Item Response Theory, a modern and flexible measurement paradigm, to form the basis of adaptive population well-being measurement. Adaptive testing allows survey questions to be administered selectively, thereby reducing the number of questions required of the participant. After the graded response model was fit to a sample of size N = 12,035, theta scores were estimated based on both the full-item bank and a simulation of Computerized Adaptive Testing (CAT). Comparisons of these 2 sets of score estimates with each other and of their correlations with external outcomes of job performance, absenteeism, and hospital admissions demonstrate that the CAT well-being scores maintain accuracy and validity. The simulation indicates that the average survey taker can expect a reduction in number of items administered during the CAT process of almost 50%. An increase in efficiency of this extent is of considerable value because of the time savings during the administration of the survey and the potential improvement of user experience, which in turn can help secure the success of a total population-based well-being improvement program. (Population Health Management 2016;19:284-290). PMID:26674396
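    To make the item-selection step concrete, here is a generic Python sketch of maximum-information selection under the graded response model; the item-bank format (discrimination plus ordered thresholds per item) is assumed, the ability update between items is omitted, and none of this reflects the actual Well-Being 5 parameters or any exposure/content rules.

```python
import numpy as np

def grm_probs(theta, a, b):
    """Samejima graded response model: a = discrimination, b = ordered thresholds.
    Returns the category probabilities P_0..P_K at ability theta."""
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))   # P(X >= k), k = 1..K
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    return p_star[:-1] - p_star[1:]

def grm_information(theta, a, b):
    """Fisher item information for the GRM (summed over response categories)."""
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    p = p_star[:-1] - p_star[1:]
    w = p_star * (1.0 - p_star)
    return a**2 * np.sum((w[:-1] - w[1:])**2 / np.maximum(p, 1e-12))

def select_next_item(theta_hat, item_bank, administered):
    """Pick the unadministered item with maximum information at the current
    ability estimate -- the core of the CAT loop described above."""
    best, best_info = None, -np.inf
    for idx, (a, b) in enumerate(item_bank):
        if idx in administered:
            continue
        info = grm_information(theta_hat, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best
```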

  3. Adaptive Measurement of Well-Being: Maximizing Efficiency and Optimizing User Experience during Individual Assessment

    PubMed Central

    Kraatz, Miriam; Coberley, Carter R.; Pope, James E.

    2016-01-01

    Well-being is linked to important societal factors such as health care costs and productivity and has experienced a surge in development activity of both theories and measurement. This study builds on validation of the Well-Being 5 survey and for the first time applies Item Response Theory, a modern and flexible measurement paradigm, to form the basis of adaptive population well-being measurement. Adaptive testing allows survey questions to be administered selectively, thereby reducing the number of questions required of the participant. After the graded response model was fit to a sample of size N = 12,035, theta scores were estimated based on both the full-item bank and a simulation of Computerized Adaptive Testing (CAT). Comparisons of these 2 sets of score estimates with each other and of their correlations with external outcomes of job performance, absenteeism, and hospital admissions demonstrate that the CAT well-being scores maintain accuracy and validity. The simulation indicates that the average survey taker can expect a reduction in number of items administered during the CAT process of almost 50%. An increase in efficiency of this extent is of considerable value because of the time savings during the administration of the survey and the potential improvement of user experience, which in turn can help secure the success of a total population-based well-being improvement program. (Population Health Management 2016;19:284–290) PMID:26674396

  4. Performance Optimization of NEMO Oceanic Model at High Resolution

    NASA Astrophysics Data System (ADS)

    Epicoco, Italo; Mocavero, Silvia; Aloisio, Giovanni

    2014-05-01

    The NEMO oceanic model is based on the Navier-Stokes equations along with a nonlinear equation of state, which couples the two active tracers (temperature and salinity) to the fluid velocity. The code is written in Fortran 90 and parallelized using MPI. The resolution of the global ocean models used today for climate change studies limits the prediction accuracy. To overcome this limit, a new high-resolution global model based on NEMO, simulating at 1/16° with 100 vertical levels, has been developed at CMCC. The model is computationally and memory intensive, so it requires many resources to run, and an optimization effort is needed. The strategy requires a preliminary analysis to highlight scalability bottlenecks; it has been performed on a SandyBridge architecture at CMCC. An efficiency of 48% on 7K cores (the maximum available) has been achieved. The analysis has also been carried out at the routine level, so that improvement actions can be designed for the entire code or for a single kernel. The analysis highlighted, for example, a loss of performance due to the routine used to implement the north fold algorithm (i.e. handling the points at the north pole of the three-pole grid): an optimization of the routine implementation is needed. The folding is achieved by considering only the last 4 rows at the top of the global domain and applying a rotation pivoting on the point in the middle. During the folding, the point at the top left is updated with the value of the point at the bottom right, and so on. The current version of the parallel algorithm is based on domain decomposition. Each MPI process takes care of a block of points. Each process can update its points using values belonging to the symmetric process. In the current implementation, each received message is placed in a buffer with a number of elements equal to the total dimension of the global domain. Each process sweeps the entire buffer, but only a part of that computation is really useful for the
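    Purely as an illustration of the fold described above (and not NEMO's actual indexing, which depends on the grid-point type and pivot convention), the numpy sketch below refreshes the upper half of the four-row fold band with a 180-degree rotation of its lower half; the point made in the abstract is that only this small band, rather than a buffer sized to the whole global domain, needs to be exchanged and swept.

```python
import numpy as np

def north_fold_band(field):
    """Illustrative north-fold sketch: only the top four rows of the global
    array take part, and the upper half of that band is refreshed with a
    180-degree rotation of the lower half about the central pivot, so the
    top-left point receives the bottom-right value. Real NEMO offsets differ
    by grid-point type (T/U/V/F) and pivot convention."""
    band = field[-4:, :]                      # view of the four northernmost rows
    band[2:, :] = np.rot90(band[:2, :], 2)    # rotate lower half into the upper half
    return field

# The optimization hinted at above: an MPI process only needs the columns of
# these four rows owned by its symmetric partner, so the exchange buffer can be
# sized to 4 x (local columns) instead of the full global domain.
```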

  5. Importance of eccentric actions in performance adaptations to resistance training

    NASA Technical Reports Server (NTRS)

    Dudley, Gary A.; Miller, Bruce J.; Buchanan, Paul; Tesch, Per A.

    1991-01-01

    The importance of eccentric (ecc) muscle actions in resistance training for the maintenance of muscle strength and mass in hypogravity was investigated in experiments in which human subjects, divided into three groups, were asked to perform four to five sets of 6 to 12 repetitions (rep) per set of three leg press and leg extension exercises, 2 days each week for 19 weeks. One group, labeled 'con', performed each rep with only concentric (con) actions, while group con/ecc performed each rep with both con and ecc actions; the third group, con/con, performed twice as many sets with only con actions. Control subjects did not train. It was found that resistance training with both con and ecc actions induced greater increases in muscle strength than did training with only con actions.

  6. Monte Carlo modelling of multiconjugate adaptive optics performance on the European Extremely Large Telescope

    NASA Astrophysics Data System (ADS)

    Basden, A. G.

    2015-11-01

    The performance of a wide-field adaptive optics system depends on input design parameters. Here we investigate the performance of a multiconjugate adaptive optics system design for the European Extremely Large Telescope, using an end-to-end Monte Carlo adaptive optics simulation tool, DASP (Durham adaptive optics simulation platform). We consider parameters such as the number of laser guide stars, sodium layer depth, wavefront sensor pixel scale, number of deformable mirrors (DMs), mirror conjugation and actuator pitch. We provide potential areas where costs savings can be made, and investigate trade-offs between performance and cost. We conclude that a six-laser guide star system using three DMs seems to be a sweet spot for performance and cost compromise.

  7. Adaptive and optimal detection of elastic object scattering with single-channel monostatic iterative time reversal

    NASA Astrophysics Data System (ADS)

    Ying, Ying-Zi; Ma, Li; Guo, Sheng-Ming

    2011-05-01

    In active sonar operation, the presence of background reverberation and the low signal-to-noise ratio hinder the detection of targets. This paper investigates the application of single-channel monostatic iterative time reversal to mitigate these difficulties by exploiting the resonances of the target. Theoretical analysis indicates that the iterative process will adaptively lead echoes to converge to a narrowband signal corresponding to a scattering object's dominant resonance mode, thus optimising the return level. Experiments on the detection of targets in the free field and near a planar interface were performed. The results illustrate the feasibility of the method.

  8. Economic performance of irrigation capacity development to adapt to climate in the American Southwest

    NASA Astrophysics Data System (ADS)

    Ward, Frank A.; Crawford, Terry L.

    2016-09-01

    Growing demands for food security to feed increasing populations worldwide have intensified the search for improved performance of irrigation, the world's largest water user. These challenges are raised in the face of climate variability and from growing environmental demands. Adaptation measures in irrigated agriculture include fallowing land, shifting cropping patterns, increased groundwater pumping, reservoir storage capacity expansion, and increased production of risk-averse crops. Water users in the Gila Basin headwaters of the U.S. Lower Colorado Basin have faced a long history of high water supply fluctuations producing low-valued defensive cropping patterns. To date, little research grade analysis has investigated economically viable measures for irrigation development to adjust to variable climate. This gap has made it hard to inform water resource policy decisions on workable measures to adapt to climate in the world's dry rural areas. This paper's contribution is to illustrate, formulate, develop, and apply a new methodology to examine the economic performance from irrigation capacity improvements in the Gila Basin of the American Southwest. An integrated empirical optimization model using mathematical programming is developed to forecast cropping patterns and farm income under two scenarios (1) status quo without added storage capacity and (2) with added storage capacity in which existing barriers to development of higher valued crops are dissolved. We find that storage capacity development can lead to a higher valued portfolio of irrigation production systems as well as more sustained and higher valued farm livelihoods. Results show that compared to scenario (1), scenario (2) increases regional farm income by 30%, in which some sub regions secure income gains exceeding 900% compared to base levels. Additional storage is most economically productive when institutional and technical constraints facing irrigated agriculture are dissolved. Along with
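    The integrated model itself is not reproduced in the abstract; the toy linear program below (SciPy) only illustrates the kind of scenario comparison described, choosing irrigated acreage per crop to maximize net income under land and water limits, with and without the firmer supply that added storage would provide. Crop names, returns, water duties, and limits are invented placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import linprog

def farm_income(water_available, crops, land_limit=1000.0):
    """Toy LP: choose irrigated acreage per crop to maximize net income subject
    to land (acres) and water (acre-ft) limits. All numbers are illustrative."""
    names, net_income, water_use = zip(*crops)        # $/acre, acre-ft/acre
    c = -np.asarray(net_income)                       # linprog minimizes, so negate
    A_ub = np.vstack([np.ones(len(crops)), np.asarray(water_use)])
    b_ub = [land_limit, water_available]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(crops))
    return -res.fun, dict(zip(names, res.x))

# Placeholder crops: (name, net income $/acre, water use acre-ft/acre)
crops = [("alfalfa", 250.0, 4.0), ("chile", 900.0, 3.0), ("pecans", 1200.0, 5.0)]
income_no_storage, plan_a = farm_income(water_available=2500.0, crops=crops)
income_storage, plan_b = farm_income(water_available=4000.0, crops=crops)  # firmer supply with storage
```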

  9. Welding Adaptive Functions Performed Through Infrared (IR) Simplified Vision Schemes

    NASA Astrophysics Data System (ADS)

    Begin, Ghlslain; Boillot, Jean-Paul

    1984-02-01

    An ideal integrated robotic welding system should incorporate off-line programming with the possibility of real-time modifications of a given welding programme. Off-line programming makes it possible to optimize the various sequences of a programme by simulation and therefore increases the welding station duty cycle. Real-time modifications of a given programme, generated either by an off-line programming scheme or by a learn mode on the first piece of a series, are essential because on many occasions the cumulative dimensional tolerances and the distortions associated with the process build up a misfit between the programmed welding path and the real joint to be welded, to the extent that welding defects occur.

  10. Autonomous Propulsion System Technology Being Developed to Optimize Engine Performance Throughout the Lifecycle

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2004-01-01

    The goal of the Autonomous Propulsion System Technology (APST) project is to reduce pilot workload under both normal and anomalous conditions. Ongoing work under APST develops and leverages technologies that provide autonomous engine monitoring, diagnosing, and controller adaptation functions, resulting in an integrated suite of algorithms that maintain the propulsion system's performance and safety throughout its life. Engine-to-engine performance variation occurs among new engines because of manufacturing tolerances and assembly practices. As an engine wears, the performance changes as operability limits are reached. In addition to these normal phenomena, other unanticipated events such as sensor failures, bird ingestion, or component faults may occur, affecting pilot workload as well as compromising safety. APST will adapt the controller as necessary to achieve optimal performance for a normal aging engine, and the safety net of APST algorithms will examine and interpret data from a variety of onboard sources to detect, isolate, and if possible, accommodate faults. Situations that cannot be accommodated within the faulted engine itself will be referred to a higher level vehicle management system. This system will have the authority to redistribute the faulted engine's functionality among other engines, or to replan the mission based on this new engine health information. Work is currently underway in the areas of adaptive control to compensate for engine degradation due to aging, data fusion for diagnostics and prognostics of specific sensor and component faults, and foreign object ingestion detection. In addition, a framework is being defined for integrating all the components of APST into a unified system. A multivariable, adaptive, multimode control algorithm has been developed that accommodates degradation-induced thrust disturbances during throttle transients. The baseline controller of the engine model currently being investigated has multiple control

  11. A Phase I/II adaptive design to determine the optimal treatment regimen from a set of combination immunotherapies in high-risk melanoma.

    PubMed

    Wages, Nolan A; Slingluff, Craig L; Petroni, Gina R

    2015-03-01

    In oncology, vaccine-based immunotherapy often investigates regimens that demonstrate minimal toxicity overall and higher doses may not correlate with greater immune response. Rather than determining the maximum tolerated dose, the goal of the study becomes locating the optimal biological dose, which is defined as a safe dose demonstrating the greatest immunogenicity, based on some predefined measure of immune response. Incorporation of adjuvants, new or optimized peptide vaccines, and combining vaccines with immune modulators may enhance immune response, with the aim of improving clinical response. Innovative dose escalation strategies are needed to establish the safety and immunogenicity of new immunologic combinations. We describe the implementation of an adaptive design for identifying the optimal treatment strategy in a multi-site, FDA-approved, phase I/II trial of a novel vaccination approach using long-peptides plus TLR agonists for resected stage IIB-IV melanoma. Operating characteristics of the design are demonstrated under various possible true scenarios via simulation studies. Overall performance indicates that the design is a practical Phase I/II adaptive method for use with combined immunotherapy agents. The simulation results demonstrate the method's ability to effectively recommend optimal regimens in a high percentage of trials with manageable sample sizes. The numerical results presented in this work include the type of simulation information that aid review boards in understanding design performance, such as average sample size and frequency of early trial termination, which we hope will augment early-phase trial design in cancer immunotherapy. PMID:25638752

  12. Optimizing performance by improving core stability and core strength.

    PubMed

    Hibbs, Angela E; Thompson, Kevin G; French, Duncan; Wrigley, Allan; Spears, Iain

    2008-01-01

    Core stability and core strength have been subject to research since the early 1980s. Research has highlighted benefits of training these processes for people with back pain and for carrying out everyday activities. However, less research has been performed on the benefits of core training for elite athletes and how this training should be carried out to optimize sporting performance. Many elite athletes undertake core stability and core strength training as part of their training programme, despite contradictory findings and conclusions as to their efficacy. This is mainly due to the lack of a gold standard method for measuring core stability and strength when performing everyday tasks and sporting movements. A further confounding factor is that because of the differing demands on the core musculature during everyday activities (low load, slow movements) and sporting activities (high load, resisted, dynamic movements), research performed in the rehabilitation sector cannot be applied to the sporting environment and, subsequently, data regarding core training programmes and their effectiveness on sporting performance are lacking. There are many articles in the literature that promote core training programmes and exercises for performance enhancement without providing a strong scientific rationale of their effectiveness, especially in the sporting sector. In the rehabilitation sector, improvements in lower back injuries have been reported by improving core stability. Few studies have observed any performance enhancement in sporting activities despite observing improvements in core stability and core strength following a core training programme. A clearer understanding of the roles that specific muscles have during core stability and core strength exercises would enable more functional training programmes to be implemented, which may result in a more effective transfer of these skills to actual sporting activities. PMID:19026017

  13. The Influence of Pump-and-Treat Problem Formulation on the Performance of a Hybrid Global-Local Optimizer

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Gray, G. A.

    2011-12-01

    Pump-and-treat systems are a common strategy for groundwater remediation, wherein a system of extraction wells is installed at an affected site to address pollutant migration. In this context, the likely performance of candidate remedial systems is often assessed using groundwater flow modeling. When linked with an optimizer, these models can be utilized to identify a least-cost system design that nonetheless satisfies remediation goals. Moreover, the resulting design problems serve as important tools in the development and testing of optimization algorithms. For example, consider EAGLS (Evolutionary Algorithm Guiding Local Search), a recently developed derivative-free simulation-optimization code that seeks to efficiently solve nonlinear problems by hybridizing local and global search techniques. The EAGLS package was designed to specifically target mixed variable problems and has a limited ability to intelligently adapt its behavior to given problem characteristics. For instance, to solve problems in which there are no discrete or integer variables, the EAGLS code defaults to a multi-start asynchronous parallel pattern search. Therefore, to better understand the behavior of EAGLS, the algorithm was applied to a representative dual-plume pump-and-treat containment problem. A series of numerical experiments were performed involving four different formulations of the underlying pump-and-treat optimization problem, namely: (1) optimization of pumping rates, given fixed number of wells at fixed locations; (2) optimization of pumping rates and locations of a fixed number of wells; (3) optimization of pumping rates and number of wells at fixed locations; and (4) optimization of pumping rates, locations, and number of wells. Comparison of the performance of the EAGLS software with alternative search algorithms across different problem formulations yielded new insights for improving the EAGLS algorithm and enhancing its adaptive behavior.

  14. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored using the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.
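
    As a hedged sketch of the kind of granule-level metadata generation described above, the snippet below assembles a minimal, SPASE-inspired XML description for one data file. The element names, resource identifiers, and URL are simplified placeholders and do not follow the official SPASE schema or the actual ADAPT routines.

```python
import xml.etree.ElementTree as ET

def granule_record(parent_id, file_name, access_url):
    """Build a minimal, SPASE-inspired granule description.
    Element names are illustrative placeholders, not the official schema."""
    spase = ET.Element("Spase")
    gran = ET.SubElement(spase, "Granule")
    ET.SubElement(gran, "ResourceID").text = f"{parent_id}/Granule/{file_name}"
    ET.SubElement(gran, "ParentID").text = parent_id
    src = ET.SubElement(gran, "Source")
    ET.SubElement(src, "URL").text = access_url
    return ET.tostring(spase, encoding="unicode")

# Hypothetical identifiers and URL, purely for illustration.
print(granule_record("spase://Example/NumericalData/MissionX/CDF",
                     "missionx_20150101_v01.cdf",
                     "https://example.org/data/missionx_20150101_v01.cdf"))
```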

  15. SAXO: the extreme adaptive optics system of SPHERE (I) system overview and global laboratory performance

    NASA Astrophysics Data System (ADS)

    Sauvage, Jean-Francois; Fusco, Thierry; Petit, Cyril; Costille, Anne; Mouillet, David; Beuzit, Jean-Luc; Dohlen, Kjetil; Kasper, Markus; Suarez, Marcos; Soenke, Christian; Baruffolo, Andrea; Salasnich, Bernardo; Rochat, Sylvain; Fedrigo, Enrico; Baudoz, Pierre; Hugot, Emmanuel; Sevin, Arnaud; Perret, Denis; Wildi, Francois; Downing, Mark; Feautrier, Philippe; Puget, Pascal; Vigan, Arthur; O'Neal, Jared; Girard, Julien; Mawet, Dimitri; Schmid, Hans Martin; Roelfsema, Ronald

    2016-04-01

    The direct imaging of exoplanets is a leading field of today's astronomy. The photons coming from a planet carry precious information on the chemical composition of its atmosphere. The second-generation instrument, Spectro-Polarimetric High-contrast Exoplanet Research (SPHERE), dedicated to the detection, photometry, and spectral characterization of Jovian-like planets, is now in operation on the European Very Large Telescope. This instrument relies on an extreme adaptive optics (XAO) system to compensate for atmospheric turbulence as well as for internal errors with unprecedented accuracy. We demonstrate the high level of performance reached by the SPHERE XAO system (SAXO) during the assembly, integration, and test (AIT) period. In order to fully characterize the instrument quality, two AIT periods were required. In the first phase, at the Observatoire de Paris, the performance of SAXO itself was assessed. In the second phase, at the IPAG Grenoble Observatory, the operation of SAXO in interaction with the overall instrument was optimized. In addition to these two phases, a final check was performed after reintegration of the instrument at Paranal Observatory, in the New Integration Hall, before installation at the telescope focus. The final performance targeted by the SPHERE instrument with the help of SAXO is among the highest Strehl ratios claimed for an operational instrument (90% in H band and 43% in V band under realistic turbulence, r0, and wind speed conditions), a limiting R magnitude for loop closure of 15, and robustness to high wind speeds. The full width at half maximum reached by the instrument is 40 mas in the infrared H band and an unprecedented 18.5 mas in V band.
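
    A quick way to see why the same residual wavefront error yields very different Strehl ratios in H and V band is the extended Maréchal approximation, S ≈ exp[-(2πσ/λ)²]. The sketch below assumes an illustrative 85 nm RMS residual error; this is not SPHERE's published error budget, only a plausibility check of the band dependence.

```python
import numpy as np

def strehl_marechal(rms_wfe_nm, wavelength_nm):
    """Extended Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2)."""
    phase_rms = 2.0 * np.pi * rms_wfe_nm / wavelength_nm   # residual phase, radians RMS
    return np.exp(-phase_rms ** 2)

# 85 nm RMS is an assumed residual error, not the SPHERE error budget.
for band, lam_nm in (("V", 550.0), ("H", 1650.0)):
    print(band, round(strehl_marechal(85.0, lam_nm), 2))
```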

  16. Improving BCI performance through co-adaptation: applications to the P300-speller.

    PubMed

    Mattout, Jérémie; Perrin, Margaux; Bertrand, Olivier; Maby, Emmanuel

    2015-02-01

    A well-known neurophysiological marker that can easily be captured with electroencephalography (EEG) is the so-called P300: a positive signal deflection occurring at about 300 ms after a relevant stimulus. This brain response is particularly salient when the target stimulus is rare among a series of distracting stimuli, whatever the type of sensory input. Therefore, it has been proposed and extensively studied as a possible feature for direct brain-computer communication. The most advanced non-invasive BCI application based on this principle is the P300-speller. However, it is still a matter of debate whether this application will prove relevant to any population of patients. In a series of recent theoretical and empirical studies, we have been using this P300-based paradigm to push forward the performance of non-invasive BCI. This paper summarizes the proposed improvements and obtained results. Importantly, those could be generalized to many kinds of BCI, beyond this particular application. Indeed, they relate to most of the key components of a closed-loop BCI, namely: improving the accuracy of the system by trying to detect and correct for errors automatically; optimizing the computer's speed-accuracy trade-off by endowing it with adaptive behavior; but also simplifying the hardware and time for set-up in the aim of routine use in patients. Our results emphasize the importance of the closed-loop interaction and of the ensuing co-adaptation between the user and the machine whenever possible. Most of our evaluations have been conducted in healthy subjects. We conclude with perspectives for clinical applications. PMID:25623293

  17. Physical Protection System Upgrades - Optimizing for Performance and Cost

    SciTech Connect

    Bouchard, Ann M.; Hicks, Mary Jane

    1999-07-09

    CPA--Cost and Performance Analysis--is an architecture that supports analysis of physical protection systems and upgrade options. ASSESS (Analytic System and Software for Evaluating Security Systems), a tool for evaluating performance of physical protection systems, currently forms the cornerstone for evaluating detection probabilities and delay times of the system. Cost and performance data are offered to the decision-maker at the systems level and to technologists at the path-element level. A new optimization engine has been attached to the CPA methodology to automate analyses of many combinations (portfolios) of technologies. That engine controls a new analysis sequencer that automatically modifies ASSESS PPS files (facility descriptions), automatically invokes ASSESS Outsider analysis and then saves results for post-processing. Users can constrain the search to an upper bound on total cost, to a lower bound on level of performance, or to include specific technologies or technology types. This process has been applied to a set of technology development proposals to identify those portfolios that provide the most improvement in physical security for the lowest cost to install, operate and maintain at a baseline facility.
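
    The CPA optimization engine itself is not reproduced in this record; the sketch below only illustrates the underlying idea of scoring technology portfolios under a cost cap. The option names, costs, and gain scores are hypothetical, and adding gain scores linearly is a simplification of how detection probabilities and delay times would actually combine.

```python
from itertools import combinations

# Hypothetical upgrade options: (name, cost in $k, performance gain score).
options = [("sensor_A", 120, 0.30), ("sensor_B", 200, 0.45),
           ("barrier_C", 90, 0.20), ("camera_D", 60, 0.15),
           ("lighting_E", 40, 0.10)]

def best_portfolio(options, cost_cap):
    """Exhaustively score every combination that fits under the cost cap."""
    best, best_gain = (), 0.0
    for r in range(1, len(options) + 1):
        for combo in combinations(options, r):
            cost = sum(c for _, c, _ in combo)
            gain = sum(g for _, _, g in combo)   # simplification: gains assumed additive
            if cost <= cost_cap and gain > best_gain:
                best, best_gain = combo, gain
    return [name for name, _, _ in best], best_gain

print(best_portfolio(options, cost_cap=300))
```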

  18. Low-cost high performance adaptive optics real-time controller in free space optical communication system

    NASA Astrophysics Data System (ADS)

    Chen, Shanqiu; Liu, Chao; Zhao, Enyi; Xian, Hao; Xu, Bing; Ye, Yutang

    2014-11-01

    This paper proposes a low-cost, high-performance adaptive optics real-time controller for a free-space optical communication system. The real-time controller is built around a 4-core CPU running Linux patched with the Real-Time Application Interface (RTAI) and a frame grabber, for a total cost below $6000. A multi-core parallel processing scheme and SSE instruction optimization of the reconstruction process yield roughly a 5x speedup, and with a streamlined processing scheme the overall processing time for this 137-element adaptive optics system drops below 100 µs with a latency of about 50 µs, meeting the requirement of operation at frame rates above 1709 Hz. A real-time data storage system based on a circular buffer allows the system to store consecutive image frames and provides a means to analyze the image data and intermediate data such as slope information.
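
    The paper's circular-buffer storage is described but not listed; the following is a minimal NumPy ring buffer for retaining the most recent camera frames, written as a generic sketch. The slot count and frame shape are illustrative, not the parameters of the 137-element system.

```python
import numpy as np

class RingBuffer:
    """Fixed-size circular buffer holding the most recent frames."""
    def __init__(self, n_slots, frame_shape):
        self.buf = np.zeros((n_slots,) + frame_shape, dtype=np.float32)
        self.n_slots, self.head, self.count = n_slots, 0, 0

    def push(self, frame):
        self.buf[self.head] = frame                      # overwrite the oldest slot
        self.head = (self.head + 1) % self.n_slots
        self.count = min(self.count + 1, self.n_slots)

    def latest(self, k=1):
        """Return the k most recent frames, newest first."""
        idx = [(self.head - 1 - i) % self.n_slots for i in range(min(k, self.count))]
        return self.buf[idx]

rb = RingBuffer(n_slots=1024, frame_shape=(128, 128))    # illustrative sizes
for _ in range(2000):
    rb.push(np.random.rand(128, 128))
print(rb.latest(3).shape)                                # -> (3, 128, 128)
```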

  19. An adaptive strategy on the error of the objective functions for uncertainty-based derivative-free optimization

    NASA Astrophysics Data System (ADS)

    Fusi, F.; Congedo, P. M.

    2016-03-01

    In this work, a strategy is developed to deal with the error affecting the objective functions in uncertainty-based optimization. We refer to the problems where the objective functions are the statistics of a quantity of interest computed by an uncertainty quantification technique that propagates some uncertainties of the input variables through the system under consideration. In real problems, the statistics are computed by a numerical method and therefore they are affected by a certain level of error, depending on the chosen accuracy. The errors on the objective function can be interpreted with the abstraction of a bounding box around the nominal estimation in the objective functions space. In addition, in some cases the uncertainty quantification methods providing the objective functions also supply the possibility of adaptive refinement to reduce the error bounding box. The novel method relies on the exchange of information between the outer loop based on the optimization algorithm and the inner uncertainty quantification loop. In particular, in the inner uncertainty quantification loop, a control is performed to decide whether a refinement of the bounding box for the current design is appropriate or not. In single-objective problems, the current bounding box is compared to the current optimal design. In multi-objective problems, the decision is based on the comparison of the error bounding box of the current design and the current Pareto front. With this strategy, fewer computations are made for clearly dominated solutions and an accurate estimate of the objective function is provided for the interesting, non-dominated solutions. The results presented in this work prove that the proposed method improves the efficiency of the global loop, while preserving the accuracy of the final Pareto front.
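
    The refinement decision described above can be sketched for a two-objective minimization problem as follows: a design's error box is refined only if its most optimistic corner is not already dominated by the current Pareto front. The function names and the axis-aligned box representation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def clearly_dominated(box_lo, front):
    """A design is clearly dominated if some front point is at least as good as the
    most optimistic corner of its error box in every objective, and better in one."""
    return any(np.all(p <= box_lo) and np.any(p < box_lo) for p in front)

def needs_refinement(center, half_width, front):
    """Refine only when the optimistic corner could still be non-dominated."""
    box_lo = np.asarray(center) - np.asarray(half_width)
    return not clearly_dominated(box_lo, front)

front = [np.array([1.0, 4.0]), np.array([2.0, 2.0]), np.array([4.0, 1.0])]
print(needs_refinement(center=[3.0, 3.5], half_width=[0.2, 0.2], front=front))  # False
print(needs_refinement(center=[1.6, 2.4], half_width=[0.5, 0.5], front=front))  # True
```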

  20. The Adapted Dance Process: Planning, Partnering, and Performing

    ERIC Educational Resources Information Center

    Block, Betty A.; Johnson, Peggy V.

    2011-01-01

    This article contains specific planning, partnering, and performing techniques for fully integrating dancers with special needs into a dance pedagogy program. Each aspect is discussed within the context of the domains of learning. Fundamental partnering strategies are related to each domain as part of the integration process. The authors recommend…

  1. Teachers Adapt Their Instruction According to Students' Academic Performance

    ERIC Educational Resources Information Center

    Nurmi, Jari-Erik; Viljaranta, Jaana; Tolvanen, Asko; Aunola, Kaisa

    2012-01-01

    This study examined the extent to which a student's academic performance in first grade contributes to the active instruction given by a teacher to a particular student. To investigate this, 105 first graders were tested in mathematics and reading in the fall and spring of their first school year. At the same time points, their teachers filled in…

  2. High performance dosimetry calculations using adapted ray-tracing

    NASA Astrophysics Data System (ADS)

    Perrotte, Lancelot; Saupin, Guillaume

    2010-11-01

    When preparing interventions on nuclear sites, it is useful to study different scenarios in order to identify the most appropriate one for the operator(s). Virtual reality tools are a good way to simulate the potential scenarios. Very efficient computation times therefore help the user study different complex scenarios by immediately evaluating the impact of any change. In the field of radiation protection, computation codes based on the straight-line attenuation method with build-up factors are widely used. As with other approaches, the geometrical computations (finding all the interactions between radiation rays and the scene objects) remain the bottleneck of the simulation. We present in this paper several optimizations used to speed up these geometrical computations, using innovative GPU ray-tracing algorithms. For instance, we manage to compute every intersection between 600 000 rays and a huge 3D industrial scene in a fraction of a second. Moreover, our algorithm works the same way for both static and dynamic scenes, allowing easier study of complex intervention scenarios (where everything moves: the operator(s), the shielding objects, and the radiation sources).
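
    For context, the straight-line attenuation method mentioned above reduces, for a single ray, to a point-kernel estimate of the form D = S·B·exp(-Σ μᵢxᵢ)/(4πr²). The sketch below uses placeholder attenuation coefficients and a crude linear build-up model; it is not the GPU code described in the paper, which is concerned with finding the ray-object intersections that feed such a formula.

```python
import math

def dose_rate(source_strength, distance_m, segments):
    """Point-kernel estimate along one ray.
    segments: list of (linear attenuation coefficient [1/m], path length [m])."""
    mfp = sum(mu * x for mu, x in segments)          # total number of mean free paths
    buildup = 1.0 + mfp                              # crude linear build-up model (placeholder)
    geometry = 1.0 / (4.0 * math.pi * distance_m ** 2)
    return source_strength * buildup * math.exp(-mfp) * geometry

# Hypothetical ray crossing 5 cm of steel and 20 cm of concrete (coefficients are placeholders).
print(dose_rate(source_strength=1e9, distance_m=2.0,
                segments=[(60.0, 0.05), (15.0, 0.20)]))
```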

  3. Optimizing small wind turbine performance in battery charging applications

    NASA Astrophysics Data System (ADS)

    Drouilhet, Stephen; Muljadi, Eduard; Holz, Richard; Gevorgian, Vahan

    1995-05-01

    Many small wind turbine generators (10 kW or less) consist of a variable speed rotor driving a permanent magnet synchronous generator (alternator). One application of such wind turbines is battery charging, in which the generator is connected through a rectifier to a battery bank. The wind turbine electrical interface is essentially the same whether the turbine is part of a remote power supply for telecommunications, a standalone residential power system, or a hybrid village power system, in short, any system in which the wind generator output is rectified and fed into a DC bus. Field experience with such applications has shown that both the peak power output and the total energy capture of the wind turbine often fall short of expectations based on rotor size and generator rating. In this paper, the authors present a simple analytical model of the typical wind generator battery charging system that allows one to calculate actual power curves if the generator and rotor properties are known. The model clearly illustrates how the load characteristics affect the generator output. In the second part of this paper, the authors present four approaches to maximizing energy capture from wind turbines in battery charging applications. The first of these is to determine the optimal battery bank voltage for a given WTG. The second consists of adding capacitors in series with the generator. The third approach is to place an optimizing DC/DC voltage converter between the rectifier and the battery bank. The fourth is a combination of the series capacitors and the optimizing voltage controller. They also discuss both the limitations and the potential performance gain associated with each of the four configurations.
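
    A heavily simplified version of the kind of load-matching model described above is sketched below: the battery clamps the rectified DC voltage, the generator EMF is proportional to rotor speed, and the operating point is where rotor aerodynamic power balances the generator load. The power-coefficient curve, EMF constant, winding resistance, and battery voltage are all invented placeholders, not the authors' turbine data.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters, not from the paper's turbine.
rho, R_rotor = 1.2, 1.5                      # air density (kg/m^3), rotor radius (m)
k_e, R_arm, V_batt = 2.0, 0.5, 48.0          # EMF constant (V s/rad), winding resistance (ohm), battery voltage (V)

def cp(lam):
    """Crude single-peak power-coefficient curve (placeholder)."""
    return max(0.0, 0.4 * (1.0 - ((lam - 7.0) / 7.0) ** 2))

def rotor_power(omega, v_wind):
    lam = omega * R_rotor / v_wind
    return 0.5 * rho * np.pi * R_rotor ** 2 * cp(lam) * v_wind ** 3

def generator_power(omega):
    """Shaft power drawn once the rectified EMF exceeds the battery voltage."""
    e = k_e * omega
    i = max(0.0, (e - V_batt) / R_arm)
    return e * i

for v_wind in (4.0, 6.0, 8.0, 10.0):
    # Operating speed where rotor power balances the generator load (above generator cut-in).
    omega = brentq(lambda w: rotor_power(w, v_wind) - generator_power(w),
                   V_batt / k_e, 200.0)
    p_batt = V_batt * max(0.0, (k_e * omega - V_batt) / R_arm)
    print(f"v = {v_wind:4.1f} m/s  omega = {omega:6.1f} rad/s  P_battery = {p_batt:7.1f} W")
```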

  4. Aircraft design for mission performance using nonlinear multiobjective optimization methods

    NASA Technical Reports Server (NTRS)

    Dovi, Augustine R.; Wrenn, Gregory A.

    1990-01-01

    A new technique which converts a constrained optimization problem to an unconstrained one where conflicting figures of merit may be simultaneously considered was combined with a complex mission analysis system. The method is compared with existing single and multiobjective optimization methods. A primary benefit from this new method for multiobjective optimization is the elimination of separate optimizations for each objective, which is required by some optimization methods. A typical wide body transport aircraft is used for the comparative studies.
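
    The abstract does not name the envelope technique used to fold constraints and competing figures of merit into one unconstrained function; a common choice for this is the Kreisselmeier-Steinhauser (KS) function, sketched below as an assumption rather than as the authors' exact formulation. The two toy objectives and the single constraint are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative approximation of max()."""
    v = np.asarray(values, float)
    m = v.max()
    return m + np.log(np.sum(np.exp(rho * (v - m)))) / rho

def composite(x):
    # Two competing figures of merit (toy stand-ins, e.g. fuel burn and field length)
    # plus one constraint g <= 0, all folded into a single unconstrained function.
    f1 = (x[0] - 1.0) ** 2 + 0.5 * x[1] ** 2
    f2 = (x[1] - 2.0) ** 2 + 0.5 * x[0] ** 2
    g = x[0] + x[1] - 2.5
    return ks([f1, f2, g])

res = minimize(composite, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x, res.fun)
```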

  5. Capsule performance optimization in the National Ignition Campaign

    SciTech Connect

    Landen, O. L.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Glenzer, S. H.; Hamza, A.; Hicks, D. G.; Izumi, N.; Jones, O. S.; Kirkwood, R. K.; Michel, P.; Milovich, J.; Munro, D. H.; Robey, H. F.; Spears, B. K.; Thomas, C. A.

    2010-05-15

    A capsule performance optimization campaign will be conducted at the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition by laser-driven hohlraums [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)]. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the OMEGA facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.

  6. Capsule Performance Optimization in the National Ignition Campaign

    SciTech Connect

    Landen, O L; MacGowan, B J; Haan, S W; Edwards, J

    2009-10-13

    A capsule performance optimization campaign will be conducted at the National Ignition Facility to substantially increase the probability of ignition. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.

  7. The PMAS Fiber Module: Design, Manufacture and Performance Optimization

    NASA Astrophysics Data System (ADS)

    Kelz, Andreas; Roth, Martin M.; Becker, Thomas; Bauer, Svend-Marian

    2003-02-01

    PMAS, the Potsdam Multi-Aperture Spectrophotometer, is a new integral field (IF or 3D) instrument. It features a lenslet/optical fiber type integral field module and a dedicated fiber spectrograph. As the instrumental emphasis is on photometric stability and high efficiency, good flat field characteristic across the integral field is needed. The PMAS fiber module is unique in the sense that the design allows the replacement of individual fibers. This property, together with the fact that the fibers are index-matched at both ends, makes it possible to achieve and maintain a high efficiency. We present the opto-mechanical design for this fiber-module and, using various data sets from previous observing runs, demonstrate the increase of performance as a result of the optimization of the fiber-components.

  8. Capsule performance optimization in the national ignition campaign

    NASA Astrophysics Data System (ADS)

    Landen, O. L.; MacGowan, B. J.; Haan, S. W.; Edwards, J.

    2010-08-01

    A capsule performance optimization campaign will be conducted at the National Ignition Facility [1] to substantially increase the probability of ignition. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.

  9. Capsule performance optimization in the National Ignition Campaign

    NASA Astrophysics Data System (ADS)

    Landen, O. L.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Glenzer, S. H.; Hamza, A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kirkwood, R. K.; Kyrala, G. A.; Michel, P.; Milovich, J.; Munro, D. H.; Nikroo, A.; Olson, R. E.; Robey, H. F.; Spears, B. K.; Thomas, C. A.; Weber, S. V.; Wilson, D. C.; Marinak, M. M.; Suter, L. J.; Hammel, B. A.; Meyerhofer, D. D.; Atherton, J.; Edwards, J.; Haan, S. W.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.

    2010-05-01

    A capsule performance optimization campaign will be conducted at the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition by laser-driven hohlraums [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)]. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the OMEGA facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.

  10. Performance optimization of a gas turbine-based cogeneration system

    NASA Astrophysics Data System (ADS)

    Yilmaz, Tamer

    2006-06-01

    In this paper an exergy optimization has been carried out for a cogeneration plant consisting of a gas turbine, which is operated in a Brayton cycle, and a heat recovery steam generator (HRSG). In the analysis, objective functions of the total produced exergy and exergy efficiency have been defined as functions of the design parameters of the gas turbine and the HRSG. An equivalent temperature is defined as a new approach to model the exergy rate of heat transfer from the HRSG. The optimum design parameters of the cogeneration cycle at maximum exergy are determined and the effects of these parameters on exergetic performance are investigated. Some practical mathematical relations are also derived to find the optimum values of the adiabatic temperature ratio for given extreme temperatures and consumer temperature.
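
    The paper's closed-form optimum relations are not reproduced here; as a loosely related toy example, the sketch below maximizes the net specific work of an ideal air-standard Brayton cycle over the compressor temperature ratio and checks the classical analytic optimum √(T₃/T₁). This is a much simpler model than the cogeneration exergy analysis described above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

cp, T1, T3 = 1.005, 300.0, 1400.0   # kJ/(kg K), compressor-inlet and turbine-inlet temperatures (illustrative)

def net_work(x):
    """Ideal air-standard Brayton net specific work as a function of the
    compressor temperature ratio x = T2/T1 (isentropic, no losses)."""
    return cp * (T3 * (1.0 - 1.0 / x) - T1 * (x - 1.0))

res = minimize_scalar(lambda x: -net_work(x), bounds=(1.01, T3 / T1), method="bounded")
print(res.x, np.sqrt(T3 / T1))      # numerical optimum vs analytic sqrt(T3/T1)
```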

  11. Medical Device Risk Management For Performance Assurance Optimization and Prioritization.

    PubMed

    Gaamangwe, Tidimogo; Babbar, Vishvek; Krivoy, Agustina; Moore, Michael; Kresta, Petr

    2015-01-01

    Performance assurance (PA) is an integral component of clinical engineering medical device risk management. For that reason, the clinical engineering (CE) community has made concerted efforts to define appropriate risk factors and develop quantitative risk models for efficient data processing and improved PA program operational decision making. However, a common framework that relates the various processes of a quantitative risk system does not exist. This article provides a perspective that focuses on medical device quality and risk-based elements of the PA program, which include device inclusion/exclusion, schedule optimization, and inspection prioritization. A PA risk management framework is provided, and previous quantitative models that have contributed to the advancement of PA risk management are examined. A general model for quantitative risk systems is proposed, and further perspective on possible future directions in the area of PA technology is also provided. PMID:26618842

  12. Optimized performance of solar powered variable speed induction motor drive

    SciTech Connect

    Singh, B.N.; Singh, B.P.; Singh, B.; Chandra, A.; Al-Haddad, K.

    1995-12-31

    This paper deals with the design and development of a photovoltaic (PV) array-fed cage induction motor for an isolated water pumping system. A drive system using a chopper circuit to track maximum power from the PV array under different solar insolation levels, and a current-controlled voltage source inverter (CC-VSI) to optimally match the motor to the PV characteristics, is presented. The model equations governing the interaction of the torque- and flux-producing components of the motor current with the available solar power are developed for operation of the system at optimum efficiency. Performance of the system is presented for different realistic operating conditions, which demonstrates its special features for applications such as solar water pumping systems, solar vehicles, and flour mills located in hilly and isolated areas.
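
    The chopper-based maximum power point tracking mentioned above is commonly implemented with a perturb-and-observe loop; the sketch below shows that generic scheme against a toy power-voltage curve. Neither the curve nor the step size comes from the paper.

```python
def pv_power(v):
    """Toy PV power-voltage curve with a single maximum (not a real array model)."""
    return max(0.0, v * (8.0 - 0.02 * v ** 1.8))

def perturb_and_observe(v0=20.0, step=0.5, n_iter=200):
    """Classic P&O: keep stepping the operating voltage in whichever direction raised power."""
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(n_iter):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

print(perturb_and_observe())      # settles into a small oscillation around the maximum power point
```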

  13. Whole cell electrochemistry of electricity-producing microorganisms evidence an adaptation for optimal exocellular electron transport.

    PubMed

    Busalmen, Juan Pablo; Esteve-Nuñez, Abraham; Feliu, Juan Miguel

    2008-04-01

    The mechanism(s) by which electricity-producing microorganisms interact with an electrode is poorly understood. Outer membrane cytochromes and conductive pili are being considered as possible players, but the available information has not yet converged on a consensus mechanism. In this work we demonstrate that Geobacter sulfurreducens cells are able to change the way in which they exchange electrons with an electrode as a response to changes in the applied electrode potential. After several hours of polarization at 0.1 V Ag/AgCl-KCl (saturated), the voltammetric signature of the attached cells showed a single redox pair with a formal redox potential of about -0.08 V as calculated from chronopotentiometric analysis. A similar signal was obtained from cells adapted to 0.4 V. However, new redox couples were detected after conditioning at 0.6 V. A large oxidation process beyond 0.5 V transferring a higher current than that obtained at 0.1 V was found to be associated with two reduction waves at 0.23 and 0.50 V. The apparent equilibrium potential of these new processes was estimated to be at about 0.48 V from programmed current potentiometric results. Importantly, when polarization was lowered again to 0.1 V for 18 additional hours, the signals obtained at 0.6 V were found to greatly diminish in amplitude, whereas those previously found at the lower conditioning potential were recovered. Results clearly show the reversibility of cell adaptation to the electrode potential and point to the polarization potential as a key variable to optimize energy production from an electricity-producing population. PMID:18504979

  14. Optimization of MCAO performances: experimental results on ONERA laboratory MCAO bench

    NASA Astrophysics Data System (ADS)

    Costille, Anne; Petit, Cyril; Conan, Jean-Marc; Fusco, Thierry; Kulcsár, Caroline; Raynaud, Henri-François

    2008-07-01

    Classic adaptive optics (AO) is now a proven technique to correct turbulence on ground-based astronomical telescopes. The corrected field of view is, however, limited by the anisoplanatism effect. Multi-conjugate AO (MCAO) aims at providing wide-field correction through the use of several deformable mirrors and multi-guide-star wavefront sensing. However, the performance optimization of such complex systems raises new questions in terms of calibration and control. We present our current developments on the performance optimization of MCAO systems. We show that performance can be significantly improved with tomographic control based on Linear Quadratic Gaussian control, compared with more standard methods. An experimental demonstration of this new approach will be implemented on HOMER, the recent bench developed at ONERA and devoted to MCAO laboratory research. We present here closed-loop results in AO, GLAO, and MCAO with an integrator control. This bench implements two deformable mirrors and a wide-field Shack-Hartmann wavefront sensor.
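
    The integrator control used as the baseline on the bench can be sketched in a few lines: slopes are reconstructed into actuator commands through the pseudo-inverse of an interaction matrix and accumulated with a loop gain. The matrices below are random stand-ins, not HOMER calibration data, and the LQG controller itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
n_act, n_slopes = 12, 32
D = rng.normal(size=(n_slopes, n_act))        # interaction matrix (stand-in for a calibrated one)
R = np.linalg.pinv(D)                         # least-squares reconstructor
turb = rng.normal(size=n_act)                 # frozen turbulence, expressed on the actuators

cmd, gain = np.zeros(n_act), 0.5
for k in range(50):
    residual_phase = turb - cmd               # what the wavefront sensor "sees" after correction
    slopes = D @ residual_phase               # measurement (noise omitted for clarity)
    cmd = cmd + gain * (R @ slopes)           # discrete integrator update
print(np.linalg.norm(turb - cmd))             # residual shrinks toward zero
```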

  15. Performance analysis & optimization of well production in unconventional resource plays

    NASA Astrophysics Data System (ADS)

    Sehbi, Baljit Singh

    The Unconventional Resource Plays consisting of the lowest tier of resources (large volumes and most difficult to develop) have been the main focus of US domestic activity during recent times. Horizontal well drilling and hydraulic fracturing completion technology have been primarily responsible for this paradigm shift. The concept of drainage volume is being examined using pressure diffusion along streamlines. We use diffusive time of flight to optimize the number of hydraulic fracture stages in horizontal well application for Tight Gas reservoirs. Numerous field case histories are available in literature for optimizing number of hydraulic fracture stages, although the conclusions are case specific. In contrast, a general method is being presented that can be used to augment field experiments necessary to optimize the number of hydraulic fracture stages. The optimization results for the tight gas example are in line with the results from economic analysis. The fluid flow simulation for Naturally Fractured Reservoirs (NFR) is performed by Dual-Permeability or Dual-Porosity formulations. Microseismic data from Barnett Shale well is used to characterize the hydraulic fracture geometry. Sensitivity analysis, uncertainty assessment, manual & computer assisted history matching are integrated to develop a comprehensive workflow for building reliable reservoir simulation models. We demonstrate that incorporating proper physics of flow is the first step in building reliable reservoir simulation models. Lack of proper physics often leads to unreasonable reservoir parameter estimates. The workflow demonstrates reduced non-uniqueness for the inverse history matching problem. The behavior of near-critical fluids in Liquid Rich Shale plays defies the production behavior observed in conventional reservoir systems. In conventional reservoirs an increased gas-oil ratio is observed as flowing bottom-hole pressure is less than the saturation pressure. The production behavior is

  16. Parallel performance optimizations on unstructured mesh-based simulations

    DOE PAGESBeta

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid

    2015-06-01

    This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data from runs on thousands of cores of the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.

  17. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    NASA Technical Reports Server (NTRS)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty, expressed as a percent of full-scale. However in some applications, we seek to obtain enhanced performance at the low range, therefore expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to desire to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent of reading accuracy requirement, which has broad application in all types of transducer applications where low range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System that employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
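
    One way to express the percent-of-reading idea in a fit is to weight the least-squares residuals by the inverse of the reading, as sketched below on synthetic data. The quadratic calibration model and the noise level are assumptions for illustration; this is not the MEADS calibration procedure itself.

```python
import numpy as np

rng = np.random.default_rng(0)
true_load = np.linspace(5.0, 1000.0, 40)                   # applied loads (synthetic)
response = 0.002 * true_load + 1e-7 * true_load ** 2       # sensor output (synthetic)
response += rng.normal(scale=2e-4, size=response.size)     # measurement noise

X = np.vander(response, 3, increasing=True)                # quadratic calibration model in the response

# Ordinary least squares targets absolute error (percent of full scale);
# weighting rows by 1/reading targets relative error (percent of reading).
beta_ols = np.linalg.lstsq(X, true_load, rcond=None)[0]
w = 1.0 / true_load
beta_wls = np.linalg.lstsq(X * w[:, None], true_load * w, rcond=None)[0]

for name, b in (("OLS", beta_ols), ("WLS", beta_wls)):
    rel_err = (X @ b - true_load) / true_load
    print(name, f"max percent-of-reading error: {100 * np.abs(rel_err).max():.2f}%")
```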

  18. Structuring an EPICS System to Optimize Reliability, Performance and Cost

    SciTech Connect

    Bickley, Matthew; White, Karen S

    2005-10-10

    Thomas Jefferson National Accelerator Facility (Jefferson Lab) uses EPICS as the basis for its control system, effectively operating a number of plants at the laboratory, including the Continuous Electron Beam Accelerator (CEBA), a Free Electron Laser (FEL), Central Helium Liquefier, and several ancillary systems. We now use over 200 distributed computers running over a complex segmented network with 350,000 EPICS records and 50,000 control points to support operation of two machines for three experimental halls, along with the supporting infrastructure. During the 10 years that EPICS has been in use we have made a number of design and implementation choices in the interest of optimizing control system reliability, performance, security and cost. At the highest level, the control system is divided into a number of distinct segments, each controlling a separate operational plant. This supports operational independence, and therefore reliability, and provides a more flexible environment for maintenance and support. The control system is relatively open, allowing any of the 300 account holders to look at data from any segment. However security and operational needs mandate restricted write access to the various control points based on each user's job function and the operational mode of the facility. Additionally, the large number of simultaneous users, coupled with the amount of available data, necessitates the use of throttling mechanisms such as a Nameserver, which effectively reduces broadcast traffic and improves application initialization performance. Our segmented approach provides natural boundaries for managing data flow and restricting access, using tools such as the EPICS Gateway and Channel Access Security. This architecture enables cost optimizations by allowing expensive resources, such as Oracle, to be effectively shared throughout the control system, while minimizing the impact of a failure in any single area. This paper discusses the various design

  19. Optimizing timing performance of silicon photomultiplier-based scintillation detectors

    PubMed Central

    Yeom, Jung Yeol; Vinke, Ruud

    2013-01-01

    Precise timing resolution is crucial for applications requiring photon time-of-flight (ToF) information such as ToF positron emission tomography (PET). Silicon photomultipliers (SiPM) for PET, with their high output capacitance, are known to require custom preamplifiers to optimize timing performance. In this paper, we describe simple alternative front-end electronics based on a commercial low-noise RF preamplifier and methods that have been implemented to achieve excellent timing resolution. Two radiation detectors with L(Y)SO scintillators coupled to Hamamatsu SiPMs (MPPC S10362–33-050C) and front-end electronics based on an RF amplifier (MAR-3SM+), typically used for wireless applications that require minimal additional circuitry, have been fabricated. These detectors were used to detect annihilation photons from a Ge-68 source and the output signals were subsequently digitized by a high speed oscilloscope for offline processing. A coincident resolving time (CRT) of 147 ± 3 ps FWHM and 186 ± 3 ps FWHM with 3 × 3 × 5 mm3 and with 3 × 3 × 20 mm3 LYSO crystal elements were measured, respectively. With smaller 2 × 2 × 3 mm3 LSO crystals, a CRT of 125 ± 2 ps FWHM was achieved with slight improvement to 121 ± 3 ps at a lower temperature (15°C). Finally, with the 20 mm length crystals, a degradation of timing resolution was observed for annihilation photon interactions that occur close to the photosensor compared to shallow depth-of-interaction (DOI). We conclude that commercial RF amplifiers optimized for noise, besides their ease of use, can produce excellent timing resolution comparable to best reported values acquired with custom readout electronics. On the other hand, as timing performance degrades with increasing photon DOI, a head-on detector configuration will produce better CRT than a side-irradiated setup for longer crystals. PMID:23369872
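
    The coincidence resolving time quoted above is the FWHM of the distribution of time differences between the two detectors; a minimal estimate from (synthetic) paired timestamps is sketched below, assuming Gaussian jitter so that FWHM = 2√(2 ln 2)·σ. The 60 ps per-detector jitter is an arbitrary placeholder, not the measured MPPC value.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
sigma_det = 60e-12                                   # per-detector timing jitter (synthetic, seconds)
t1 = rng.normal(0.0, sigma_det, n)                   # detector 1 timestamps relative to truth
t2 = rng.normal(0.0, sigma_det, n)                   # detector 2 timestamps relative to truth
dt = t1 - t2                                         # coincidence time differences

sigma = dt.std(ddof=1)
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma      # Gaussian FWHM = 2.355 * sigma
print(f"CRT ~ {fwhm * 1e12:.0f} ps FWHM")            # ~ 2.355 * sqrt(2) * 60 ps ~ 200 ps
```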

  20. Production of cold-adapted amylase by marine bacterium Wangia sp. C52: optimization, modeling, and partial characterization.

    PubMed

    Liu, Jianguo; Zhang, Zhiqiang; Liu, Zhiqiang; Zhu, Hu; Dang, Hongyue; Lu, Jianren; Cui, Zhanfeng

    2011-10-01

    The aim of this work was to optimize the fermentation parameters in the shake-flask culture of the marine bacterium Wangia sp. C52 to increase cold-adapted amylase production, using two statistical experimental methods: Plackett-Burman design, which was applied to find the key ingredients for the best medium composition, and response surface methodology, which was used to determine the optimal concentrations of these components. The results showed that starch, tryptone, and initial pH had significant effects on cold-adapted amylase production. A central composite design was then employed to further optimize these three factors. The experimental results indicated that the optimized medium composition was 6.38 g L⁻¹ starch, 33.84 g L⁻¹ tryptone, 3.00 g L⁻¹ yeast extract, 30 g L⁻¹ NaCl, 0.60 g L⁻¹ MgSO₄, and 0.56 g L⁻¹ CaCl₂. The optimized cultivation conditions for amylase production were pH 7.18, a temperature of 20°C, and a shaking speed of 180 rpm. Under the proposed optimized conditions, the experimental amylase yield (676.63 U mL⁻¹) closely matched the yield (685.60 U mL⁻¹) predicted by the statistical model. The optimization of the medium contributed to tenfold higher amylase production than that of the control in shake-flask experiments. PMID:21365455
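
    The response-surface step can be illustrated with a small sketch: fit a second-order model to coded design points and locate its stationary point. The synthetic data, the omission of interaction terms, and the separable-quadratic optimum formula below are simplifications for illustration, not the paper's actual design matrix or coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
# Coded factor levels (starch, tryptone, pH) in a central-composite-style design (synthetic).
X = rng.uniform(-1.68, 1.68, size=(20, 3))
true_opt = np.array([0.2, 0.5, -0.3])
y = 680.0 - 40.0 * np.sum((X - true_opt) ** 2, axis=1) + rng.normal(0, 5, 20)  # activity, U/mL

# Second-order model: intercept, linear, and pure quadratic terms (interactions omitted for brevity).
A = np.hstack([np.ones((20, 1)), X, X ** 2])
beta = np.linalg.lstsq(A, y, rcond=None)[0]
b_lin, b_quad = beta[1:4], beta[4:7]
stationary = -b_lin / (2.0 * b_quad)      # dY/dx_i = 0 for a separable quadratic
print("estimated optimum (coded units):", stationary.round(2))
```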

  1. Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy

    SciTech Connect

    Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc; Létourneau, Mélanie; Fenster, Aaron; Pouliot, Jean

    2013-11-15

    Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, both for arbitrary templates or template-free implants (such as robotic templates).Methods: Eight clinical cases were chosen randomly from a bank of patients, previously treated in our clinic to test our method. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheters optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated from the algorithm for different number of catheters. The best plan is chosen from different dosimetry criteria and will automatically provide the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, it was validated against prostate clinical cases, using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested in breast interstitial HDR brachytherapy cases.Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with a better or equivalent dose distributions could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or less catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable since no statistical difference was found when compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combination of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the
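
    The CVT construction itself is commonly computed with Lloyd iterations; the sketch below distributes a chosen number of catheter positions uniformly inside a toy 2D contour by repeatedly moving each generator to the centroid of its (sampled) Voronoi cell. The elliptical contour and sampling parameters are placeholders, and the dosimetric optimization (IPSA) step is not included.

```python
import numpy as np

def cvt_lloyd(n_catheters, inside, n_samples=20000, n_iter=50, seed=0):
    """inside(points) -> boolean mask selecting points within the target contour."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1, 1, size=(10 * n_samples, 2))
    pts = pts[inside(pts)][:n_samples]                  # dense samples of the region
    gen = pts[rng.choice(len(pts), n_catheters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(pts[:, None, :] - gen[None, :, :], axis=2)
        label = d.argmin(axis=1)                        # nearest-generator assignment
        for j in range(n_catheters):
            members = pts[label == j]
            if len(members):
                gen[j] = members.mean(axis=0)           # move generator to its cell centroid
    return gen

ellipse = lambda p: (p[:, 0] / 0.9) ** 2 + (p[:, 1] / 0.6) ** 2 <= 1.0   # toy target contour
print(cvt_lloyd(12, ellipse).round(2))
```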

  2. The Astronaut-Athlete: Optimizing Human Performance in Space.

    PubMed

    Hackney, Kyle J; Scott, Jessica M; Hanson, Andrea M; English, Kirk L; Downs, Meghan E; Ploutz-Snyder, Lori L

    2015-12-01

    It is well known that long-duration spaceflight results in deconditioning of neuromuscular and cardiovascular systems, leading to a decline in physical fitness. On reloading in gravitational environments, reduced fitness (e.g., aerobic capacity, muscular strength, and endurance) could impair human performance, mission success, and crew safety. The level of fitness necessary for the performance of routine and off-nominal terrestrial mission tasks remains an unanswered and pressing question for scientists and flight physicians. To mitigate fitness loss during spaceflight, resistance and aerobic exercise are the most effective countermeasures available to astronauts. Currently, 2.5 h·d⁻¹, 6-7 d·wk⁻¹ is allotted in crew schedules for exercise to be performed on highly specialized hardware on the International Space Station (ISS). Exercise hardware provides up to 273 kg of loading capability for resistance exercise, treadmill speeds between 0.44 and 5.5 m·s⁻¹, and cycle workloads from 0 to 350 W. Compared to ISS missions, future missions beyond low Earth orbit will likely be accomplished with less vehicle volume and power allocated for exercise hardware. Concomitant factors, such as diet and age, will also affect the physiologic responses to exercise training (e.g., anabolic resistance) in the space environment. Research into the potential optimization of exercise countermeasures through the use of dietary supplementation and pharmaceuticals may assist in reducing physiological deconditioning during long-duration spaceflight and has the potential to enhance performance of occupationally related astronaut tasks (e.g., extravehicular activity, habitat construction, equipment repairs, planetary exploration, and emergency response). PMID:26595138

  3. Neural network-based optimal adaptive output feedback control of a helicopter UAV.

    PubMed

    Nodland, David; Zargarzadeh, Hassan; Jagannathan, Sarangapani

    2013-07-01

    Helicopter unmanned aerial vehicles (UAVs) are widely used for both military and civilian operations. Because the helicopter UAVs are underactuated nonlinear mechanical systems, high-performance controller design for them presents a challenge. This paper introduces an optimal controller design via an output feedback for trajectory tracking of a helicopter UAV, using a neural network (NN). The output-feedback control system utilizes the backstepping methodology, employing kinematic and dynamic controllers and an NN observer. The online approximator-based dynamic controller learns the infinite-horizon Hamilton-Jacobi-Bellman equation in continuous time and calculates the corresponding optimal control input by minimizing a cost function, forward-in-time, without using the value and policy iterations. Optimal tracking is accomplished by using a single NN utilized for the cost function approximation. The overall closed-loop system stability is demonstrated using Lyapunov analysis. Finally, simulation results are provided to demonstrate the effectiveness of the proposed control design for trajectory tracking. PMID:24808521

  4. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    PubMed

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system. PMID:25330468
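
    A minimal gravitational search algorithm of the kind used for the tuning can be sketched as below: agent masses are derived from fitness, a decaying gravitational constant scales the attractive forces, and positions are updated from the resulting accelerations. The toy quadratic objective stands in for the ITAE-plus-sensitivity cost and is not the paper's objective function.

```python
import numpy as np

def gsa(f, bounds, n_agents=20, n_iter=100, G0=100.0, alpha=20.0, seed=0):
    """Minimal gravitational search algorithm for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, size=(n_agents, len(lo)))
    v = np.zeros_like(x)
    best_x, best_f = None, np.inf
    for t in range(n_iter):
        fit = np.array([f(xi) for xi in x])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), x[fit.argmin()].copy()
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / (best - worst + 1e-12)       # heavier mass = better fitness
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-alpha * t / n_iter)             # gravity decays over iterations
        acc = np.zeros_like(x)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                r = np.linalg.norm(x[i] - x[j])
                acc[i] += rng.random() * G * M[j] * (x[j] - x[i]) / (r + 1e-12)
        v = rng.random(size=v.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    return best_x, best_f

# Toy stand-in for the controller-tuning objective (not the ITAE-plus-sensitivity cost).
obj = lambda p: (p[0] - 1.2) ** 2 + 10.0 * (p[1] - 0.3) ** 2
print(gsa(obj, bounds=[(0.0, 5.0), (0.0, 2.0)]))
```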

  5. Adaptive Optimal Kernel Smooth-Windowed Wigner-Ville Distribution for Digital Communication Signal

    NASA Astrophysics Data System (ADS)

    Tan, Jo Lynn; Sha'ameri, Ahmad Zuribin

    2009-12-01

    Time-frequency distributions (TFDs) are powerful tools to represent the energy content of time-varying signal in both time and frequency domains simultaneously but they suffer from interference due to cross-terms. Various methods have been described to remove these cross-terms and they are typically signal-dependent. Thus, there is no single TFD with a fixed window or kernel that can produce accurate time-frequency representation (TFR) for all types of signals. In this paper, a globally adaptive optimal kernel smooth-windowed Wigner-Ville distribution (AOK-SWWVD) is designed for digital modulation signals such as ASK, FSK, and M-ary FSK, where its separable kernel is determined automatically from the input signal, without prior knowledge of the signal. This optimum kernel is capable of removing the cross-terms and maintaining accurate time-frequency representation at SNR as low as 0 dB. It is shown that this system is comparable to the system with prior knowledge of the signal.

  6. Charging Guidance of Electric Taxis Based on Adaptive Particle Swarm Optimization

    PubMed Central

    Niu, Liyong; Zhang, Di

    2015-01-01

    Electric taxis are playing an important role in the deployment of electric vehicles. Actual operational data from electric taxis in Shenzhen, China, are analyzed, and, in view of the unbalanced time availability of charging station equipment, an electric taxi charging guidance system is proposed based on charging station and vehicle information. An electric taxi charging guidance model is established that guides charging according to the positions of taxis and charging stations using adaptive mutation particle swarm optimization. The simulation is based on actual data from Shenzhen charging stations, and the results show that after charging guidance the electric taxis are distributed evenly across the appropriate charging stations according to the number of charging piles at each station. An even distribution among the charging stations in the area is achieved and the utilization of charging equipment is improved, so the proposed charging guidance method is verified to be feasible. The improved utilization of charging equipment can greatly conserve public charging infrastructure resources. PMID:26236770
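
    As a generic illustration of adaptive mutation particle swarm optimization, the sketch below runs a standard PSO and perturbs particles with a mutation whose strength grows while the global best stagnates. The continuous toy objective is a stand-in; the paper's actual taxi-to-station assignment model is not reproduced.

```python
import numpy as np

def adaptive_mutation_pso(f, bounds, n_particles=30, n_iter=200, seed=0):
    """PSO with a mutation step whose strength grows while the global best stagnates."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[pbest_f.argmin()].copy()
    g_f, stall = pbest_f.min(), 0
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.72 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(xi) for xi in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        if pbest_f.min() < g_f - 1e-12:
            g, g_f, stall = pbest[pbest_f.argmin()].copy(), pbest_f.min(), 0
        else:
            stall += 1
            # Adaptive mutation: perturb a random particle more strongly the longer we stall.
            k = rng.integers(n_particles)
            x[k] = np.clip(x[k] + rng.normal(0, 0.05 * stall, size=len(lo)) * (hi - lo), lo, hi)
    return g, g_f

toy = lambda z: np.sum(z ** 2) + 3.0 * np.sin(3.0 * z).sum()   # multimodal toy objective
print(adaptive_mutation_pso(toy, bounds=[(-4, 4)] * 3))
```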

  7. Optimizing phytoremediation of heavy metal-contaminated soil by exploiting plants' stress adaptation.

    PubMed

    Barocsi, Attila; Csintalan, Zsolt; Kocsanyi, Laszlo; Dushenkov, Slavik; Kuperberg, J Michael; Kucharski, Rafal; Richter, Peter I

    2003-01-01

    Soil phytoextraction is based on the ability of plants to extract contaminants from the soil. For less bioavailable metals, such as Pb, a chelator is added to the soil to mobilize the metal. The effect can be significant and in certain species, heavy metal accumulation can rapidly increase 10-fold. Accumulation of high levels of toxic metals may result in irreversible damage to the plant. Monitoring and controlling the phytotoxicity caused by EDTA-induced metal accumulation is crucial to optimize the remedial process, i.e. to achieve maximum uptake. We describe an EDTA-application procedure that minimizes phytotoxicity by increasing plant tolerance and allows phytoextraction of elevated levels of Pb and Cd. Brassica juncea is tested in soil with typical Pb and Cd concentrations of 500 mg kg-1 and 15 mg kg-1, respectively. Instead of a single dose treatment, the chelator is applied in multiple doses, that is, in several small increments, thus providing time for plants to initiate their adaptation mechanisms and raise their damage threshold. In situ monitoring of plant stress conditions by chlorophyll fluorescence recording allows for the identification of the saturating heavy metal accumulation process and of simultaneous plant deterioration. PMID:12710232

  8. Charging Guidance of Electric Taxis Based on Adaptive Particle Swarm Optimization.

    PubMed

    Niu, Liyong; Zhang, Di

    2015-01-01

    Electric taxis are playing an important role in the deployment of electric vehicles. Actual operational data from electric taxis in Shenzhen, China, are analyzed, and, in view of the unbalanced time availability of charging station equipment, an electric taxi charging guidance system is proposed based on charging station and vehicle information. An electric taxi charging guidance model is established that guides charging according to the positions of taxis and charging stations using adaptive mutation particle swarm optimization. The simulation is based on actual data from Shenzhen charging stations, and the results show that after charging guidance the electric taxis are distributed evenly across the appropriate charging stations according to the number of charging piles at each station. An even distribution among the charging stations in the area is achieved and the utilization of charging equipment is improved, so the proposed charging guidance method is verified to be feasible. The improved utilization of charging equipment can greatly conserve public charging infrastructure resources. PMID:26236770

  9. Performance Optimization of the Gasdynamic Mirror Propulsion System

    NASA Technical Reports Server (NTRS)

    Emrich, William J., Jr.; Kammash, Terry

    1999-01-01

    Nuclear fusion appears to be a most promising concept for producing extremely high specific impulse rocket engines. Engines such as these would effectively open up the solar system to human exploration and would virtually eliminate launch window restrictions. A preliminary vehicle sizing and mission study was performed based on the conceptual design of a Gasdynamic Mirror (GDM) fusion propulsion system. This study indicated that the potential specific impulse for this engine is approximately 142,000 sec with about 22,100 N of thrust using a deuterium-tritium fuel cycle. The engine weight, inclusive of the power conversion system, was optimized around an allowable engine mass of 1500 Mg assuming advanced superconducting magnets and a Field Reversed Configuration (FRC) end plug at the mirrors. The vehicle habitat, lander, and structural weights are based on a NASA Mars mission study which assumes the use of nuclear thermal propulsion. Several manned missions to various planets were analyzed to determine fuel requirements and launch windows. For all fusion propulsion cases studied, the fuel weight remained a minor component of the total system weight regardless of when the missions commenced. In other words, the use of fusion propulsion virtually eliminates all mission window constraints and effectively allows unlimited manned exploration of the entire solar system. It also mitigates the need to have a large space infrastructure which would be required to support the transfer of massive amounts of fuel and supplies to a lower performing spacecraft.
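
    A short Tsiolkovsky-equation check makes the "fuel remains a minor component" point concrete: at Isp of roughly 142,000 s the exhaust velocity is about 1.4 × 10⁶ m/s, so even very large mission Δv values consume only a small fraction of the initial mass. The Δv values below are illustrative, not taken from the mission study.

```python
import math

g0 = 9.81
isp = 142_000.0                       # s, specific impulse quoted in the GDM study
v_e = isp * g0                        # effective exhaust velocity, ~1.39e6 m/s

def fuel_fraction(delta_v):
    """Tsiolkovsky rocket equation: m_fuel / m_initial = 1 - exp(-dv / v_e)."""
    return 1.0 - math.exp(-delta_v / v_e)

for dv_km_s in (30.0, 100.0, 300.0):  # illustrative mission delta-v values
    print(f"dv = {dv_km_s:5.0f} km/s -> fuel is {100 * fuel_fraction(dv_km_s * 1e3):.1f}% of initial mass")
```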

  10. Performance optimization of the Gasdynamic mirror propulsion system

    NASA Astrophysics Data System (ADS)

    Emrich, William J.; Kammash, Terry

    2000-01-01

    Nuclear fusion appears to be a most promising concept for producing extremely high specific impulse rocket engines. Engines such as these would effectively open up the solar system to human exploration and would virtually eliminate launch window restrictions. A preliminary vehicle sizing and mission study was performed based on the conceptual design of a Gasdynamic Mirror (GDM) fusion propulsion system. This study indicated that the potential specific impulse for this engine is approximately 142,000 sec with about 22,100 N of thrust using a deuterium-tritium fuel cycle. The engine weight, inclusive of the power conversion system, was optimized around an allowable engine mass of 1500 Mg assuming advanced superconducting magnets and a Field Reversed Configuration (FRC) end plug at the mirrors. The vehicle habitat, lander, and structural weights are based on a NASA Mars mission study which assumes the use of nuclear thermal propulsion. Several manned missions to various planets were analyzed to determine fuel requirements and launch windows. For all fusion propulsion cases studied, the fuel weight remained a minor component of the total system weight regardless of when the missions commenced. In other words, the use of fusion propulsion virtually eliminates all mission window constraints and effectively allows unlimited manned exploration of the entire solar system. It also mitigates the need to have a large space infrastructure which would be required to support the transfer of massive amounts of fuel and supplies to a lower performing spacecraft.

  11. Cyclone performance and optimization: First quarterly progress report

    SciTech Connect

    Leith, D.

    1987-12-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using the revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. The project is on or ahead of schedule. During this time, the laboratory scale equipment necessary for this project has been constructed and used to make measurements of the gas flow pattern within cyclones. Tangential gas velocities for a matrix of eleven different cyclones and operating conditions have been measured. For each test condition, tangential velocities over a wide range of axial and radial positions have been measured. In addition, the literature search that began while the proposal for this work was written has been continued. The computer and printer necessary for modeling the experimental results have been ordered and received. 1 fig.

  12. EXPLOITATION AND OPTIMIZATION OF RESERVOIR PERFORMANCE IN HUNTON FORMATION, OKLAHOMA

    SciTech Connect

    Mohan Kelkar

    2002-09-30

    The main objectives of the proposed study are as follows: (1) To understand and evaluate an unusual primary oil production mechanism which results in decreasing (retrograde) oil cut (ROC) behavior as reservoir pressure declines. (2) To improve calculations of initial oil in place so as to determine the economic feasibility of completing and producing a well. (3) To optimize the location of new wells based on an understanding of the heterogeneities in geological and petrophysical properties. (4) To evaluate various secondary recovery techniques for oil reservoirs producing from fractured formations. (5) To enhance the productivity of producing wells by using new completion techniques. These objectives are important for optimizing field performance from West Carney Field located in Lincoln County, Oklahoma. The field, which was discovered in 1980, produces from the Hunton Formation in a shallow-shelf carbonate reservoir. Early development in the field was sporadic. Many of the initial wells were abandoned due to high water production and constraints in the surface facilities for disposing of excess produced water. Field development began in earnest in 1995 with Altex Resources, which recognized that production from this field was only possible if large volumes of water could be disposed of. Being able to dispose of large amounts of water, Altex aggressively drilled several producers. With few exceptions, all of these wells exhibited similar characteristics. Initial production showed trace amounts of oil and gas, with water as the dominant phase. As the reservoir was depleted, the oil cut eventually improved, making the overall production feasible. This decreasing oil cut (ROC) behavior has not been well understood. However, the field has been subjected to intense drilling activity because of the prior success of Altex Resources. In this work, we will investigate the primary production mechanism by conducting several core flood experiments. After collecting cores from representative

  13. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
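
    The dual-version idea in this record can be pictured with a short sketch (a hypothetical illustration in Python with invented function names, not the patented compiler mechanism): the aggressively optimized version runs first, and execution rolls back to the conservative version if it raises an exception.

      # Minimal sketch of the aggressive/conservative dual-version idea (hypothetical,
      # not the patented compiler mechanism): try the fast version, roll back on failure.

      def run_with_rollback(aggressive, conservative, *args):
          """Execute the aggressively optimized version; fall back on any exception."""
          try:
              return aggressive(*args)
          except Exception:
              # "Rollback": discard the failed attempt and re-execute conservatively.
              return conservative(*args)

      def sum_inverse_aggressive(values):
          # Assumes no zeros (the "unsafe" assumption an optimizer might exploit).
          return sum(1.0 / v for v in values)

      def sum_inverse_conservative(values):
          # Guards against the exception source removed by the aggressive version.
          return sum(1.0 / v for v in values if v != 0)

      print(run_with_rollback(sum_inverse_aggressive, sum_inverse_conservative, [1.0, 2.0, 0.0]))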

  14. Capsule Performance Optimization for the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Landen, Otto

    2009-11-01

    The overall goal of the capsule performance optimization campaign is to maximize the probability of ignition by experimentally correcting for likely residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. This will be accomplished using a variety of targets that will set key laser, hohlraum and capsule parameters to maximize ignition capsule implosion velocity, while minimizing fuel adiabat, core shape asymmetry and ablator-fuel mix. The targets include high Z re-emission spheres setting foot symmetry through foot cone power balance [1], liquid Deuterium-filled ``keyhole'' targets setting shock speed and timing through the laser power profile [2], symmetry capsules setting peak cone power balance and hohlraum length [3], and streaked x-ray backlit imploding capsules setting ablator thickness [4]. We will show how results from successful tuning technique demonstration shots performed at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design meet the required sensitivity and accuracy. We will also present estimates of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors, and show that these get reduced after a number of shots and iterations to meet an acceptable level of residual uncertainty. Finally, we will present results from upcoming tuning technique validation shots performed at NIF at near full-scale. Prepared by LLNL under Contract DE-AC52-07NA27344. [4pt] [1] E. Dewald, et. al. Rev. Sci. Instrum. 79 (2008) 10E903. [0pt] [2] T.R. Boehly, et. al., Phys. Plasmas 16 (2009) 056302. [0pt] [3] G. Kyrala, et. al., BAPS 53 (2008) 247. [0pt] [4] D. Hicks, et. al., BAPS 53 (2008) 2.

  15. Interactive Genetic Algorithm - An Adaptive and Interactive Decision Support Framework for Design of Optimal Groundwater Monitoring Plans

    NASA Astrophysics Data System (ADS)

    Babbar-Sebens, M.; Minsker, B. S.

    2006-12-01

    In the water resources management field, decision making encompasses many kinds of engineering, social, and economic constraints and objectives. Representing all of these problem-dependent criteria through models (analytical or numerical) and various formulations (e.g., objectives, constraints, etc.) within an optimization-simulation system can be a very non-trivial issue. Most models and formulations utilized for discerning desirable traits in a solution can only approximate the decision maker's (DM) true preference criteria, and they often fail to consider important qualitative and incomputable phenomena related to the management problem. In our research, we have proposed novel decision support frameworks that allow DMs to actively participate in the optimization process. The DMs explicitly indicate their true preferences based on their subjective criteria and the results of various simulation models and formulations. The feedback from the DMs is then used to guide the search process towards solutions that are "all-rounders" from the perspective of the DM. The two main research questions explored in this work are: a) Does interaction between the optimization algorithm and a DM assist the system in searching for groundwater monitoring designs that are robust from the DM's perspective?, and b) How can an interactive search process be made more effective when human factors, such as human fatigue and cognitive learning processes, affect the performance of the algorithm? The application of these frameworks on a real-world groundwater long-term monitoring (LTM) case study in Michigan highlighted the following salient advantages: a) in contrast to the non-interactive optimization methodology, the proposed interactive frameworks were able to identify low cost monitoring designs whose interpolation maps respected the expected spatial distribution of the contaminants, b) for many same-cost designs, the interactive methodologies were able to propose multiple alternatives

  16. Performance benefits of adaptive, multimicrophone, interference-canceling systems in everyday environments

    NASA Astrophysics Data System (ADS)

    Desloge, Joseph G.; Zimmer, Martin J.; Zurek, Patrick M.

    2001-05-01

    Adaptive multimicrophone systems are currently used for a variety of noise-cancellation applications (such as hearing aids) to preserve signals arriving from a particular (target) direction while canceling other (jammer) signals in the environment. Although the performance of these systems is known to degrade with increasing reverberation, there are few measurements of adaptive performance in everyday reverberant environments. In this study, adaptive performance was compared to that of a simple, nonadaptive cardioid microphone to determine a measure of adaptive benefit. Both systems used recordings (at an Fs of 22050 Hz) from the same two omnidirectional microphones, which were separated by 1 cm. Four classes of environment were considered: outdoors, household, parking garage, and public establishment. Sources were either environmental noises (e.g., household appliances, restaurant noise) or a controlled noise source. In all situations, no target was present (i.e., all signals were jammers) to obtain maximal jammer cancellation. Adaptive processing was based upon the Griffiths-Jim generalized sidelobe canceller using filter lengths up to 400 points. Average intelligibility-weighted adaptive benefit levels at a source distance of 1 m were, at most, 1.5 dB for public establishments, 2 dB for household rooms and the parking garage, and 3 dB outdoors. [Work supported by NIOSH.]

  17. Source-mask co-optimization: optimize design for imaging and impact of source complexity on lithography performance

    NASA Astrophysics Data System (ADS)

    Hsu, Stephen; Li, Zhipan; Chen, Luoqi; Gronlund, Keith; Liu, Hua-yu; Socha, Robert

    2009-12-01

    The co-optimization of the source and mask patterns [1, 2] is vital to future advanced ArF technology node development. This paper extends work previously reported on this topic [3, 4]. We will systematically study the impact of source on designs with different k1 values using SMO. Previous work compared the co-optimized versus iterative source-mask optimization methods [3]. We showed that the co-optimization method clearly improved lithography performance. This paper's approach consists of: 1) Co-optimize a pixelated freeform source and a continuous transmission gray tone mask based on a user specified cost function; 2) ASML-certified scanner-specific models and constraints are applied to the optimized source; 3) Assist feature (AF) "seeds" are identified from the optimized continuous transmission mask (CTM). Both the AF seed and the main feature are subsequently converted into a polygon mask; 4) The extracted AF seeds and main features are co-optimized with the source to achieve the best lithographic performance. Using this approach, we first use a DRAM brick wall design to demonstrate that using the same cost function metric by adjusting the optimization conditions creates an image-log-slope-only optimization that can easily be applied. An optimize design for imaging methodology is introduced and shown to be important for low k1 imaging. A typical 2x node SRAM design is used to illustrate an integrated SMO design rule optimization flow. We use the same SRAM layout that used design rule optimization to study the source complexity impact with a range of k1 values that varies from 0.42 to 0.35. For the source type, we use freeform and traditional finite pole shape DOEs, all subject to ASML's scanner-specific models and constraints. We report the process window, MEF and process variation band (PV band) with different source types to find which source type gives the best lithography performance.

  18. Optimizing timing performance of silicon photomultiplier-based scintillation detectors.

    PubMed

    Yeom, Jung Yeol; Vinke, Ruud; Levin, Craig S

    2013-02-21

    Precise timing resolution is crucial for applications requiring photon time-of-flight (ToF) information such as ToF positron emission tomography (PET). Silicon photomultipliers (SiPM) for PET, with their high output capacitance, are known to require custom preamplifiers to optimize timing performance. In this paper, we describe simple alternative front-end electronics based on a commercial low-noise RF preamplifier and methods that have been implemented to achieve excellent timing resolution. Two radiation detectors with L(Y)SO scintillators coupled to Hamamatsu SiPMs (MPPC S10362-33-050C) and front-end electronics based on an RF amplifier (MAR-3SM+), typically used for wireless applications that require minimal additional circuitry, have been fabricated. These detectors were used to detect annihilation photons from a Ge-68 source and the output signals were subsequently digitized by a high speed oscilloscope for offline processing. Coincidence resolving times (CRT) of 147 ± 3 ps FWHM and 186 ± 3 ps FWHM were measured with 3 × 3 × 5 mm(3) and 3 × 3 × 20 mm(3) LYSO crystal elements, respectively. With smaller 2 × 2 × 3 mm(3) LSO crystals, a CRT of 125 ± 2 ps FWHM was achieved with slight improvement to 121 ± 3 ps at a lower temperature (15 °C). Finally, with the 20 mm length crystals, a degradation of timing resolution was observed for annihilation photon interactions that occur close to the photosensor compared to shallow depth-of-interaction (DOI). We conclude that commercial RF amplifiers optimized for noise, besides their ease of use, can produce excellent timing resolution comparable to best reported values acquired with custom readout electronics. On the other hand, as timing performance degrades with increasing photon DOI, a head-on detector configuration will produce better CRT than a side-irradiated setup for longer crystals. PMID:23369872
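
    For context, the coincidence resolving time figures quoted above are the FWHM of the distribution of time differences between paired detector events; a minimal sketch of that computation on synthetic timestamps (the jitter value is an assumption, not data from the paper) is:

      import numpy as np

      # Sketch: estimate coincidence resolving time (CRT, FWHM) from paired event
      # timestamps of two detectors. Data are synthetic; real timestamps would come
      # from leading-edge or constant-fraction timing of digitized SiPM pulses.
      rng = np.random.default_rng(0)
      n_events = 20000
      sigma_det = 60e-12           # assumed per-detector timing jitter (60 ps rms)
      t1 = rng.normal(0.0, sigma_det, n_events)
      t2 = rng.normal(0.0, sigma_det, n_events)
      dt = t1 - t2                 # coincidence time differences

      # FWHM of an (approximately Gaussian) time-difference distribution.
      crt_fwhm = 2.355 * np.std(dt)
      print(f"CRT ~ {crt_fwhm * 1e12:.0f} ps FWHM")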

  19. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
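
    The layer-to-MCS assignment that the ILP solves can be illustrated with a toy exhaustive search (invented rates, coverages, and layer sizes; not the paper's formulation or 802.16m parameters):

      import itertools

      # Toy version of assigning modulation/coding schemes (MCS) to SVC layers:
      # each layer gets one MCS, airtime shares must sum to <= 1, and a layer is
      # decodable by the fraction of users whose channel supports that MCS.
      # All numbers below are invented for illustration.
      mcs_rate = {0: 2.0, 1: 4.0, 2: 6.0}          # Mb/s per unit airtime for each MCS
      user_coverage = {0: 1.0, 1: 0.7, 2: 0.4}     # fraction of users that can decode MCS
      layer_bits = [1.0, 1.5, 2.0]                 # Mb required per layer (base..enhancement)

      best = None
      for assign in itertools.product(mcs_rate, repeat=len(layer_bits)):
          airtime = sum(b / mcs_rate[m] for b, m in zip(layer_bits, assign))
          if airtime > 1.0:
              continue                             # time-resource constraint violated
          # Utility: bits delivered, weighted by the user fraction able to decode them.
          utility = sum(b * user_coverage[m] for b, m in zip(layer_bits, assign))
          if best is None or utility > best[0]:
              best = (utility, assign, airtime)

      print("best MCS per layer:", best[1], "utility:", round(best[0], 2))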

  20. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862

  1. Smart Swarms of Bacteria-Inspired Agents with Performance Adaptable Interactions

    PubMed Central

    Shklarsh, Adi; Ariel, Gil; Schneidman, Elad; Ben-Jacob, Eshel

    2011-01-01

    Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment – by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots. PMID:21980274

  2. Smart swarms of bacteria-inspired agents with performance adaptable interactions.

    PubMed

    Shklarsh, Adi; Ariel, Gil; Schneidman, Elad; Ben-Jacob, Eshel

    2011-09-01

    Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment--by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots. PMID:21980274

  3. Link performance optimization for digital satellite broadcasting systems

    NASA Astrophysics Data System (ADS)

    de Gaudenzi, R.; Elia, C.; Viola, R.

    The authors introduce the concept of digital direct satellite broadcasting (D-DBS), which allows unprecedented flexibility by providing a large number of audiovisual services. The concept assumes an information rate of 40 Mb/s, which is compatible with practically all present-day transponders. After discussion of the general system concept, the results of transmission system optimization are presented. Channel and interference effects are taken into account. Numerical results show that the scheme with the best performance is trellis-coded 8-PSK (phase shift keying) modulation concatenated with a Reed-Solomon block code. For a net data rate of 40 Mb/s, a bit error rate of 10(-10) can be achieved with an equivalent bit energy to noise density ratio (Eb/N0) of 9.5 dB, including channel, interference, and demodulator impairments. A link budget analysis shows how a medium-power direct-to-home TV satellite can provide multimedia services to users equipped with small (60-cm) dish antennas.
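
    The quoted operating point translates into a simple link-budget arithmetic check; the sketch below assumes an available C/N0 value purely for illustration:

      import math

      # Illustrative C/N0 requirement from the quoted operating point:
      # Eb/N0 = 9.5 dB at a net data rate of 40 Mb/s.
      eb_n0_db = 9.5
      rate_bps = 40e6
      c_n0_required_dbhz = eb_n0_db + 10 * math.log10(rate_bps)   # ~85.5 dB-Hz

      # With an assumed available C/N0 of 88 dB-Hz at the 60-cm dish, the margin is:
      c_n0_available_dbhz = 88.0          # assumption for illustration only
      print(f"required C/N0: {c_n0_required_dbhz:.1f} dB-Hz, "
            f"margin: {c_n0_available_dbhz - c_n0_required_dbhz:.1f} dB")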

  4. Optimal thermal-hydraulic performance for helium-cooled divertors

    SciTech Connect

    Izenson, M.G.; Martin, J.L.

    1996-07-01

    Normal flow heat exchanger (NFHX) technology offers the potential for cooling divertor panels with reduced pressure drops (<0.5% Δp/p), reduced pumping power (<0.75% pumping/thermal power), and smaller duct sizes than conventional helium heat exchangers. Furthermore, the NFHX can easily be fabricated in the large sizes required for divertors in large tokamaks. Recent experimental and computational results from a program to develop NFHX technology for divertor cooling using porous metal heat transfer media are described. We have tested the thermal and flow characteristics of porous metals and identified the optimal heat transfer material for the divertor heat exchanger. Methods have been developed to create highly conductive thermal bonds between the porous material and a solid substrate. Computational fluid dynamics calculations of flow and heat transfer in the porous metal layer have shown the capability of high thermal effectiveness. An 18-kW NFHX, designed to meet specifications for the International Thermonuclear Experimental Reactor divertor, has been fabricated and tested for thermal and flow performance. Preliminary results confirm design and fabrication methods. 11 refs., 12 figs., 1 tab.

  5. Development of a real-time transport performance optimization methodology

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn

    1996-01-01

    The practical application of real-time performance optimization is addressed (using a wide-body transport simulation) based on real-time measurements and calculation of incremental drag from forced response maneuvers. Various controller combinations can be envisioned, although this study used symmetric outboard aileron and stabilizer. The approach is based on navigation instrumentation and other measurements found on state-of-the-art transports. This information is used to calculate winds and angle of attack. Thrust is estimated from a representative engine model as a function of measured variables. The lift and drag equations are then used to calculate lift and drag coefficients. An expression for drag coefficient, which is a function of parasite drag, induced drag, and aileron drag, is solved from forced excitation response data. Estimates of the parasite drag, curvature of the aileron drag variation, and minimum drag aileron position are produced. Minimum drag is then obtained by repositioning the symmetric aileron. Simulation results are also presented which evaluate the effects of measurement bias and resolution.
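
    The minimum-drag step described here amounts to fitting a quadratic in the control deflection and repositioning the surface at its vertex; a minimal sketch with an invented drag model and noise level is:

      import numpy as np

      # Sketch: estimate the minimum-drag aileron position from forced-excitation data.
      # Assumed "true" model: CD = CD0 + k * (delta - delta_opt)^2 (values invented).
      rng = np.random.default_rng(1)
      delta = np.linspace(-4.0, 4.0, 50)                    # symmetric aileron, deg
      cd_true = 0.0300 + 4e-5 * (delta - 1.2) ** 2
      cd_meas = cd_true + rng.normal(0.0, 2e-6, delta.size) # measurement noise

      # Least-squares fit of CD = c0 + c1*delta + c2*delta^2.
      c2, c1, c0 = np.polyfit(delta, cd_meas, 2)
      delta_min_drag = -c1 / (2.0 * c2)                      # vertex of the parabola
      print(f"estimated minimum-drag deflection: {delta_min_drag:.2f} deg")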

  6. Introduction beyond a species range: a relationship between population origin, adaptive potential and plant performance

    PubMed Central

    Volis, S; Ormanbekova, D; Yermekbayev, K; Song, M; Shulgina, I

    2014-01-01

    The adaptive potential of a population defines its importance for species survival in changing environmental conditions such as global climate change. Very few empirical studies have examined adaptive potential across species' ranges, namely, of edge vs core populations, and we are unaware of a study that has tested adaptive potential (namely, variation in adaptive traits) and measured performance of such populations in conditions not currently experienced by the species but expected in the future. Here we report the results of a Triticum dicoccoides population study that employed transplant experiments and analysis of quantitative trait variation. Two populations at the opposite edges of the species range (1) were locally adapted; (2) had lower adaptive potential (inferred from the extent of genetic quantitative trait variation) than the two core populations; and (3) were outperformed by the plants from the core population in the novel environment. The fact that plants from the species arid edge performed worse than plants from the more mesic core in extreme drought conditions beyond the present climatic envelope of the species implies that usage of peripheral populations for conservation purposes must be based on intensive sampling of among-population variation. PMID:24690758

  7. How to optimize vitamin D supplementation to prevent cancer, based on cellular adaptation and hydroxylase enzymology.

    PubMed

    Vieth, Reinhold

    2009-09-01

    The question of what makes an 'optimal' vitamin D intake is usually equivalent to, 'what serum 25-hydroxyvitamin D [25(OH)D] do we need to stay above to minimize risk of disease?'. This is a simplistic question that ignores the evidence that fluctuating concentrations of 25(OH)D may in themselves be a problem, even if concentrations do exceed a minimum desirable level. Vitamin D metabolism poses unique problems for the regulation of 1,25-dihydroxyvitamin D [1,25(OH)2D] concentrations in the tissues outside the kidney that possess 25(OH)D-1-hydroxylase [CYP27B1] and the catabolic enzyme, 1,25(OH)2D-24-hydroxylase [CYP24]. These enzymes behave according to first-order reaction kinetics. When 25(OH)D declines, the ratio of 1-hydroxylase/24-hydroxylase must increase to maintain tissue 1,25(OH)2D at its set-point level. The mechanisms that regulate this paracrine metabolism are poorly understood. I propose that delay in cellular adaptation, or lag time, in response to fluctuating 25(OH)D concentrations can explain why higher 25(OH)D in regions at high latitude or with low environmental ultraviolet light can be associated with the greater risks reported for prostate and pancreatic cancers. At temperate latitudes, higher summertime 25(OH)D levels are followed by sharper declines in 25(OH)D, causing inappropriately low 1-hydroxylase and high 24-hydroxylase, resulting in tissue 1,25(OH)2D below its ideal set-point. This hypothesis can answer concerns raised by the World Health Organization's International Agency for Research on Cancer about vitamin D and cancer risk. It also explains why higher 25(OH)D concentrations are not good if they fluctuate, and that desirable 25(OH)D concentrations are ones that are both high and stable. PMID:19667164

  8. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques.

    PubMed

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi

    2016-04-21

    We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to those with the AS based signals. The average errors for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm can be as low as -0.07 with the standard deviation 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering a markerless breathing signal using the CBCT projections for thoracic and abdominal patients. PMID:27008349
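
    A rough sketch of the robust z-normalization and trace-extraction idea is given below, using a synthetic Amsterdam-Shroud-like image and a median/MAD normalization as a stand-in for the paper's filter:

      import numpy as np

      # Sketch: robust z-normalization of an Amsterdam-Shroud-like image and a simple
      # breathing-trace extraction. The AS image is synthetic; rows = detector row,
      # columns = projection (time). The paper's actual filter and extraction differ.
      rng = np.random.default_rng(2)
      n_rows, n_proj = 200, 600
      t = np.arange(n_proj)
      breathing = 3.0 * np.sin(2 * np.pi * t / 80.0)          # simulated respiratory motion
      rows = np.arange(n_rows)[:, None]
      as_image = (np.exp(-((rows - 100 - breathing) ** 2) / 50.0)
                  + 0.1 * rng.normal(size=(n_rows, n_proj)))

      # Robust z-normalization of each row using the median and MAD.
      med = np.median(as_image, axis=1, keepdims=True)
      mad = np.median(np.abs(as_image - med), axis=1, keepdims=True) + 1e-9
      as_norm = (as_image - med) / (1.4826 * mad)

      # Simple trace: intensity-weighted row centroid of each projection column.
      weights = np.clip(as_norm, 0.0, None)
      trace = (rows * weights).sum(0) / (weights.sum(0) + 1e-9)
      print("peak-to-peak of extracted trace (rows):", round(float(np.ptp(trace)), 2))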

  9. Microstructural modeling and design optimization of adaptive thin-film nanocomposite coatings for durability and wear

    NASA Astrophysics Data System (ADS)

    Pearson, James Deon

    Adaptive thin-film nanocomposite coatings comprised of crystalline ductile phases of gold and molybdenum disulfide, and brittle phases of diamond-like carbon (DLC) and yttria-stabilized zirconia (YSZ), have been investigated by specialized microstructurally-based finite-element techniques. A new microstructural computational technique for efficiently creating models of nanocomposite coatings with control over composition, grain size, spacing and morphologies has been developed to account for length scales that range from nanometers to millimeters for efficient computations. The continuum mechanics model at the nanometer scale was verified with molecular dynamic models for nanocrystalline diamond. Using this new method, the interrelated effects of microstructural characteristics such as grain shapes and sizes, matrix thicknesses, local material behavior due to interfacial stresses and strains, varying amorphous and crystalline compositions, and transfer film adhesion and thickness on coating behavior have been investigated. A mechanistic model to account for experimentally observed transfer film adhesion modes and changes in thickness was also developed. One of the major objectives of this work is to determine optimal crystalline and amorphous compositions and behavior related to wear and durability over a wide range of thermo-mechanical conditions. The computational predictions, consistent with experimental observations, indicate specific interfacial regions between DLC and ductile metal inclusions are critical regions of stress and strain accumulation that can be precursors to material failure and wear. The predicted results underscore a competition between the effects of superior tribological properties associated with MoS2 and maintaining manageable stress levels that would not exceed the coating strength. Varying the composition results in tradeoffs between lubrication, toughness, and strength, and the effects of critical stresses and strains can be controlled

  10. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques

    NASA Astrophysics Data System (ADS)

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E.; Lo, Yeh-Chi

    2016-04-01

    We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to those with the AS based signals. The average errors for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm can be as low as -0.07 with the standard deviation 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering a markerless breathing signal using the CBCT projections for thoracic and abdominal patients.

  11. Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceanographic Data

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan

    1997-01-01

    A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
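
    The exact least-squares (optimal interpolation) baseline that the multiscale algorithm approximates can be written compactly; the sketch below uses a Gaussian covariance and synthetic data, not the multiscale statistical models developed in the paper:

      import numpy as np

      # Exact optimal interpolation (the baseline the multiscale method approximates):
      # x_hat = mean + C_xy (C_yy + R)^-1 (y - mean), with a Gaussian spatial covariance.
      # Synthetic 1-D "temperature section"; covariance scale and noise are assumptions.
      rng = np.random.default_rng(3)
      x_grid = np.linspace(0.0, 1000.0, 200)                  # estimation grid, km
      x_obs = rng.uniform(0.0, 1000.0, 25)                    # station positions

      def truth(x):
          return 10.0 + 3.0 * np.sin(2 * np.pi * x / 700.0)   # synthetic temperature field

      y = truth(x_obs) + rng.normal(0.0, 0.3, x_obs.size)     # noisy observations

      def cov(a, b, sigma2=9.0, L=150.0):
          return sigma2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / L) ** 2)

      R = 0.3 ** 2 * np.eye(x_obs.size)                       # observation-error covariance
      x_hat = 10.0 + cov(x_grid, x_obs) @ np.linalg.solve(cov(x_obs, x_obs) + R, y - 10.0)
      rms_err = float(np.sqrt(np.mean((x_hat - truth(x_grid)) ** 2)))
      print("rms mapping error:", round(rms_err, 3))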

  12. Fuzzy Sets in Dynamic Adaptation of Parameters of a Bee Colony Optimization for Controlling the Trajectory of an Autonomous Mobile Robot.

    PubMed

    Amador-Angulo, Leticia; Mendoza, Olivia; Castro, Juan R; Rodríguez-Díaz, Antonio; Melin, Patricia; Castillo, Oscar

    2016-01-01

    A hybrid approach composed of different types of fuzzy systems, such as the Type-1 Fuzzy Logic System (T1FLS), Interval Type-2 Fuzzy Logic System (IT2FLS) and Generalized Type-2 Fuzzy Logic System (GT2FLS), for the dynamic adaptation of the alpha and beta parameters of a Bee Colony Optimization (BCO) algorithm is presented. The objective of the work is to focus on the BCO technique to find the optimal distribution of the membership functions in the design of fuzzy controllers. We use BCO specifically for tuning membership functions of the fuzzy controller for trajectory stability in an autonomous mobile robot. We add two types of perturbations in the model for the Generalized Type-2 Fuzzy Logic System to better analyze its behavior under uncertainty, and this shows better results when compared to the original BCO. We implemented various performance indices: ITAE, IAE, ISE, ITSE, RMSE and MSE to measure the performance of the controller. The experimental results show better performance using GT2FLS than IT2FLS and T1FLS in the dynamic adaptation of the parameters of the BCO algorithm. PMID:27618062
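
    The notion of dynamically adapting alpha and beta during the search can be sketched with a tiny type-1 fuzzy rule base over the normalized iteration count (membership shapes and rule outputs invented for illustration; not the authors' T1/IT2/GT2 systems):

      # Sketch of dynamic parameter adaptation: a small type-1 fuzzy rule base maps the
      # normalized iteration count to BCO's alpha (exploration) and beta (exploitation).
      # Membership shapes and rule outputs are invented for illustration.

      def tri(x, a, b, c):
          """Triangular membership function on [a, c] with peak at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def adapt_alpha_beta(progress):
          """progress in [0, 1]: fraction of iterations elapsed."""
          low = tri(progress, -0.5, 0.0, 0.5)
          mid = tri(progress, 0.0, 0.5, 1.0)
          high = tri(progress, 0.5, 1.0, 1.5)
          w = low + mid + high
          # Weighted-average (Sugeno-style) defuzzification: early search favors
          # exploration (high alpha), late search favors exploitation (high beta).
          alpha = (low * 0.9 + mid * 0.5 + high * 0.1) / w
          beta = (low * 0.1 + mid * 0.5 + high * 0.9) / w
          return alpha, beta

      for k in range(0, 101, 25):
          print(k, tuple(round(v, 2) for v in adapt_alpha_beta(k / 100)))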

  13. A zonal computational procedure adapted to the optimization of two-dimensional thrust augmentor inlets

    NASA Technical Reports Server (NTRS)

    Lund, T. S.; Tavella, D. A.; Roberts, L.

    1985-01-01

    A viscous-inviscid interaction methodology based on a zonal description of the flowfield is developed as a means of predicting the performance of two-dimensional thrust augmenting ejectors. An inviscid zone comprising the irrotational flow about the device is patched together with a viscous zone containing the turbulent mixing flow. The inviscid region is computed by a higher order panel method, while an integral method is used for the description of the viscous part. A non-linear, constrained optimization study is undertaken for the design of the inlet region. In this study, the viscous-inviscid analysis is complemented with a boundary layer calculation to account for flow separation from the walls of the inlet region. The thrust-based Reynolds number as well as the free stream velocity are shown to be important parameters in the design of a thrust augmentor inlet.

  14. Adaptive control of nonlinear uncertain active suspension systems with prescribed performance.

    PubMed

    Huang, Yingbo; Na, Jing; Wu, Xing; Liu, Xiaoqin; Guo, Yu

    2015-01-01

    This paper proposes adaptive control designs for vehicle active suspension systems with unknown nonlinear dynamics (e.g., nonlinear spring and piece-wise linear damper dynamics). An adaptive control is first proposed to stabilize the vertical vehicle displacement and thus to improve the ride comfort and to guarantee other suspension requirements (e.g., road holding and suspension space limitation) concerning the vehicle safety and mechanical constraints. An augmented neural network is developed to online compensate for the unknown nonlinearities, and a novel adaptive law is developed to estimate both NN weights and uncertain model parameters (e.g., sprung mass), where the parameter estimation error is used as a leakage term superimposed on the classical adaptations. To further improve the control performance and simplify the parameter tuning, a prescribed performance function (PPF) characterizing the error convergence rate, maximum overshoot and steady-state error is used to propose another adaptive control. The stability for the closed-loop system is proved and particular performance requirements are analyzed. Simulations are included to illustrate the effectiveness of the proposed control schemes. PMID:25034649
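
    The prescribed performance function referred to above is typically an exponentially decaying error envelope; a minimal numerical sketch with assumed parameter values is:

      import numpy as np

      # Prescribed performance function (generic form): an exponentially decaying
      # envelope rho(t) that the tracking error must stay inside. Parameters assumed.
      rho_0, rho_inf, lam = 0.10, 0.01, 2.0      # initial bound, steady-state bound, decay rate

      def rho(t):
          return (rho_0 - rho_inf) * np.exp(-lam * t) + rho_inf

      t = np.linspace(0.0, 3.0, 7)
      error = 0.08 * np.exp(-1.5 * t)            # a hypothetical tracking error history
      inside = np.abs(error) < rho(t)            # the constraint |e(t)| < rho(t)
      print(list(zip(np.round(rho(t), 4), inside)))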

  15. Performance bounds on micro-Doppler estimation and adaptive waveform design using OFDM signals

    NASA Astrophysics Data System (ADS)

    Sen, Satyabrata; Barhen, Jacob; Glover, Charles W.

    2014-05-01

    We analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a target having multiple rotating scatterers (e.g., rotor blades of a helicopter, propellers of a submarine). The presence of rotating scatterers introduces Doppler frequency modulation in the received signal by generating sidebands about the transmitted frequencies. This is called the micro-Doppler effect. The use of a frequency-diverse OFDM signal in this context enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. Therefore, to characterize the accuracy of micro-Doppler frequency estimation, we compute the Cramér-Rao Bound (CRB) on the angular-velocity estimate of the target while considering the scatterer responses as deterministic but unknown nuisance parameters. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the transmitting OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations at different values of the signal-to-noise ratio (SNR) and the number of OFDM subcarriers. The CRB values not only decrease with the increase in the SNR values, but also reduce as we increase the number of subcarriers, implying the significance of frequency-diverse OFDM waveforms. The improvement in estimation accuracy due to the adaptive waveform design is also numerically analyzed. Interestingly, we find that the relative decrease in the CRBs on the angular-velocity estimate is more pronounced for a larger number of OFDM subcarriers.

  16. Performance Bounds on Micro-Doppler Estimation and Adaptive Waveform Design Using OFDM Signals

    SciTech Connect

    Sen, Satyabrata; Barhen, Jacob; Glover, Charles Wayne

    2014-01-01

    We analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a target having multiple rotating scatterers (e.g., rotor blades of a helicopter, propellers of a submarine). The presence of rotating scatterers introduces Doppler frequency modulation in the received signal by generating sidebands about the transmitted frequencies. This is called the micro-Doppler effect. The use of a frequency-diverse OFDM signal in this context enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. Therefore, to characterize the accuracy of micro-Doppler frequency estimation, we compute the Cramér-Rao Bound (CRB) on the angular-velocity estimate of the target while considering the scatterer responses as deterministic but unknown nuisance parameters. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the transmitting OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations at different values of the signal-to-noise ratio (SNR) and the number of OFDM subcarriers. The CRB values not only decrease with the increase in the SNR values, but also reduce as we increase the number of subcarriers, implying the significance of frequency-diverse OFDM waveforms. The improvement in estimation accuracy due to the adaptive waveform design is also numerically analyzed. Interestingly, we find that the relative decrease in the CRBs on the angular-velocity estimate is more pronounced for a larger number of OFDM subcarriers.

  17. SWAT system performance predictions. Project report. [SWAT (Short-Wavelength Adaptive Techniques)]

    SciTech Connect

    Parenti, R.R.; Sasiela, R.J.

    1993-03-10

    In the next phase of Lincoln Laboratory's SWAT (Short-Wavelength Adaptive Techniques) program, the performance of a 241-actuator adaptive-optics system will be measured using a variety of synthetic-beacon geometries. As an aid in this experimental investigation, a detailed set of theoretical predictions has also been assembled. The computational tools that have been applied in this study include a numerical approach in which Monte-Carlo ray-trace simulations of accumulated phase error are developed, and an analytical analysis of the expected system behavior. This report describes the basis of these two computational techniques and compares their estimates of overall system performance. Although their regions of applicability tend to be complementary rather than redundant, good agreement is usually obtained when both sets of results can be derived for the same engagement scenario. Keywords: adaptive optics, phase conjugation, atmospheric turbulence, synthetic beacon, laser guide star.

  18. EEG/ERP adaptive noise canceller design with controlled search space (CSS) approach in cuckoo and other optimization algorithms.

    PubMed

    Ahirwal, M K; Kumar, Anil; Singh, G K

    2013-01-01

    This paper explores the migration of adaptive filtering with swarm intelligence/evolutionary techniques employed in the field of electroencephalogram/event-related potential noise cancellation or extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm with their variants are implemented to design an optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers with traditional algorithms such as least-mean-square, normalized least-mean-square, and recursive least-mean-square algorithms are also implemented to compare the results. ERP signals such as simulated visual evoked potential, real visual evoked potential, and real sensorimotor evoked potential are used, due to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 sec and 1.73E-01, respectively. Although the traditional algorithms require negligible computation time, they are unable to offer good shape preservation of the ERP, with an average computational time and shape measure of 1.41E-02 sec and 2.60E+00, respectively. PMID:24407307
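
    For reference, the least-mean-square canceler used here as a traditional baseline fits in a few lines; the sketch below uses synthetic ERP-like data with an assumed step size and filter length:

      import numpy as np

      # Baseline LMS adaptive noise canceller (one of the traditional methods compared
      # in the paper). Primary input = ERP + correlated noise; reference = noise source.
      rng = np.random.default_rng(4)
      n, taps, mu = 2000, 8, 0.01
      t = np.arange(n)
      erp = np.exp(-((t - 1000) / 80.0) ** 2)                 # synthetic evoked potential
      ref = rng.normal(0.0, 1.0, n)                           # reference noise channel
      noise = np.convolve(ref, [0.6, 0.3, 0.1], mode="same")  # unknown noise path
      primary = erp + noise

      w = np.zeros(taps)
      clean = np.zeros(n)
      for k in range(taps, n):
          x = ref[k - taps:k][::-1]          # most recent reference samples
          e = primary[k] - w @ x             # error = canceller output (ERP estimate)
          w += 2 * mu * e * x                # LMS weight update
          clean[k] = e
      print("residual noise power:", round(float(np.var(clean - erp)), 4))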

  19. Advancing adaptive optics technology: Laboratory turbulence simulation and optimization of laser guide stars

    NASA Astrophysics Data System (ADS)

    Rampy, Rachel A.

    Since Galileo's first telescope some 400 years ago, astronomers have been building ever-larger instruments. Yet only within the last two decades has it become possible to realize the potential angular resolutions of large ground-based telescopes, by using adaptive optics (AO) technology to counter the blurring effects of Earth's atmosphere. And only within the past decade has the development of laser guide stars (LGS) extended AO capabilities to observe science targets nearly anywhere in the sky. Improving turbulence simulation strategies and LGS are the two main topics of my research. In the first part of this thesis, I report on the development of a technique for manufacturing phase plates for simulating atmospheric turbulence in the laboratory. The process involves strategic application of clear acrylic paint onto a transparent substrate. Results of interferometric characterization of the plates are described and compared to Kolmogorov statistics. The range of r0 (Fried's parameter) achieved thus far is 0.2--1.2 mm at 650 nm measurement wavelength, with a Kolmogorov power law. These plates proved valuable at the Laboratory for Adaptive Optics at University of California, Santa Cruz, where they have been used in the Multi-Conjugate Adaptive Optics testbed, during integration and testing of the Gemini Planet Imager, and as part of the calibration system of the on-sky AO testbed named ViLLaGEs (Visible Light Laser Guidestar Experiments). I present a comparison of measurements taken by ViLLaGEs of the power spectrum of a plate and the real sky turbulence. The plate is demonstrated to follow Kolmogorov theory well, while the sky power spectrum does so in a third of the data. This method of fabricating phase plates has been established as an effective and low-cost means of creating simulated turbulence. Due to the demand for such devices, they are now being distributed to other members of the AO community. The second topic of this thesis pertains to understanding and

  1. Adaptive Effects on Locomotion Performance Following Exposure to a Rotating Virtual Environment

    NASA Technical Reports Server (NTRS)

    Mulavara, A. P.; Richards, J. T.; Marshburn, A. M.; Bucello, R.; Bloomberg, J. J.

    2003-01-01

    adaptive generalization. The purpose of this study was to determine if adaptive modification in locomotor performance could be achieved by viewing simulated self-motion in a passive-immersive virtual environment over a prolonged period during treadmill locomotion.

  2. Study of the sealing performance of tubing adapters in gas-tight deep-sea water sampler

    NASA Astrophysics Data System (ADS)

    Huang, Haocai; Yuan, Zhouli; Kang, Wuchen; Xue, Zhao; Chen, Xihao; Yang, Canjun; Ye, Yanying; Leng, Jianxing

    2014-09-01

    The tubing adapter is a key connection device in the Gas-Tight Deep-Sea Water Sampler (GTWS). The sealing performance of the tubing adapter directly affects the GTWS's overall gas tightness. Tubing adapters with good sealing performance can ensure the transmission of seawater samples without gas leakage and can be used repeatedly. However, the sealing performance of tubing adapters made of different materials has not been studied sufficiently. With the research discussed in this paper, material match schemes for the tubing adapters are proposed. Based on non-linear finite element contact analysis and sea trials in the South China Sea, the recommended material match schemes are expected not only to meet the requirements on the tubing adapters' sealing performance but also to provide feasible options for subsequent research on tubing adapters in the GTWS.

  3. Restricted and Adaptive Masculine Gender Performance in White Gay College Men

    ERIC Educational Resources Information Center

    Anderson-Martinez, Richard; Vianden, Jörg

    2014-01-01

    This article presents the results of a qualitative exploration of the performance of masculine gender identities in six gay male students enrolled at a master's comprehensive public institution in the Midwest. This article builds on the work of Laker and Davis (2011) and Rankin (2005). The findings indicate participants adapted their gender…

  4. Adapting Objective Structured Clinical Examinations to Assess Social Work Students' Performance and Reflections

    ERIC Educational Resources Information Center

    Bogo, Marion; Regehr, Cheryl; Logie, Carmen; Katz, Ellen; Mylopoulos, Maria; Regehr, Glenn

    2011-01-01

    The development of standardized, valid, and reliable methods for assessment of students' practice competence continues to be a challenge for social work educators. In this study, the Objective Structured Clinical Examination (OSCE), originally used in medicine to assess performance through simulated interviews, was adapted for social work to…

  5. Fast computation of an optimal controller for large-scale adaptive optics.

    PubMed

    Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc

    2011-11-01

    The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the methods for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported. PMID:22048298
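
    For a small system, the Kalman gain whose computation becomes prohibitive at large aperture sizes is a single Riccati solve; the toy state-space sketch below uses invented matrices and SciPy's standard solver, not the cropped-screen approximation proposed in the article:

      import numpy as np
      from scipy.linalg import solve_discrete_are

      # Toy illustration of the "standard" route that becomes expensive at large scale:
      # solve the algebraic Riccati equation for a small turbulence-phase model and
      # form the steady-state Kalman gain. Matrices are invented AR(1)-style models.
      n_modes = 4
      A = 0.99 * np.eye(n_modes)                 # temporal evolution of phase modes
      C = np.random.default_rng(5).normal(size=(n_modes, n_modes))  # wavefront-sensor matrix
      Q = 0.01 * np.eye(n_modes)                 # process (turbulence) noise covariance
      R = 0.1 * np.eye(n_modes)                  # measurement noise covariance

      # Discrete algebraic Riccati equation for the estimation problem.
      P = solve_discrete_are(A.T, C.T, Q, R)
      K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # steady-state Kalman gain
      print("Kalman gain shape:", K.shape)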

  6. Intelligent Modeling Combining Adaptive Neuro Fuzzy Inference System and Genetic Algorithm for Optimizing Welding Process Parameters

    NASA Astrophysics Data System (ADS)

    Gowtham, K. N.; Vasudevan, M.; Maduraimuthu, V.; Jayakumar, T.

    2011-04-01

    Modified 9Cr-1Mo ferritic steel is used as a structural material for steam generator components of power plants. Generally, tungsten inert gas (TIG) welding is preferred for welding of these steels, in which the depth of penetration achievable during autogenous welding is limited. Therefore, activated flux TIG (A-TIG) welding, a novel welding technique, has been developed in-house to increase the depth of penetration. In modified 9Cr-1Mo steel joints produced by the A-TIG welding process, weld bead width, depth of penetration, and heat-affected zone (HAZ) width play an important role in determining the mechanical properties as well as the performance of the weld joints during service. To obtain the desired weld bead geometry and HAZ width, it becomes important to set the welding process parameters. In this work, an adaptive neuro-fuzzy inference system is used to develop independent models correlating the welding process parameters like current, voltage, and torch speed with weld bead shape parameters like depth of penetration, bead width, and HAZ width. Then a genetic algorithm is employed to determine the optimum A-TIG welding process parameters to obtain the desired weld bead shape parameters and HAZ width.

  7. Optimal hematocrit for maximal exercise performance in acute and chronic erythropoietin-treated mice.

    PubMed

    Schuler, Beat; Arras, Margarete; Keller, Stephan; Rettich, Andreas; Lundby, Carsten; Vogel, Johannes; Gassmann, Max

    2010-01-01

    Erythropoietin (Epo) treatment increases hematocrit (Htc) and, consequently, arterial O(2) content. This in turn improves exercise performance. However, because elevated blood viscosity associated with increasing Htc levels may limit cardiac performance, it was suggested that the highest attainable Htc may not necessarily be associated with the highest attainable exercise capacity. To test the proposed hypothesis that an optimal Htc in acute and chronic Epo-treated mice exists--i.e., the Htc that facilitates the greatest O(2) flux during maximal exercise--Htc levels of wild-type mice were acutely elevated by administering novel erythropoiesis-stimulating protein (NESP; wtNESP). Furthermore, in the transgenic mouse line tg6 that reaches Htc levels of up to 0.9 because of constitutive overexpression of human Epo, the Htc was gradually reduced by application of the hemolysis-inducing compound phenylhydrazine (PHZ; tg6PHZ). Maximal cardiovascular performance was measured by using telemetry in all exercising mice. Highest maximal O(2) uptake (VO(2max)) and maximal time to exhaustion at submaximal exercise intensities were reached at Htc values of 0.58 and 0.57 for wtNESP, and 0.68 and 0.66 for tg6PHZ, respectively. Rate pressure product, and thus also maximal working capacity of the heart, increased with elevated Htc values. Blood viscosity correlated with VO(2max). Apart from the confirmation of the Htc hypothesis, we conclude that tg6PHZ adapted better to varying Htc values than wtNESP because of the higher optimal Htc of tg6PHZ compared to wtNESP. Of note, blood viscosity plays a critical role in limiting exercise capacity. PMID:19966291

  8. Optimizing Partial Credit Algorithms to Predict Student Performance

    ERIC Educational Resources Information Center

    Ostrow, Korinn; Donnelly, Christopher; Heffernan, Neil

    2015-01-01

    As adaptive tutoring systems grow increasingly popular for the completion of classwork and homework, it is crucial to assess the manner in which students are scored within these platforms. The majority of systems, including ASSISTments, return the binary correctness of a student's first attempt at solving each problem. Yet for many teachers,…
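
    A hypothetical penalty-based partial credit rule of the kind such systems might use is sketched below; the specific algorithms and penalty weights evaluated in the paper are not reproduced here.

        def partial_credit(attempts, hints, attempt_penalty=0.25, hint_penalty=0.15):
            # Hypothetical rule: start from full credit and deduct for each extra
            # attempt and each hint used, flooring the score at zero.
            score = 1.0 - attempt_penalty * max(attempts - 1, 0) - hint_penalty * hints
            return max(score, 0.0)

        # A first-attempt correct answer keeps full credit; extra attempts/hints reduce it.
        print(partial_credit(attempts=1, hints=0))   # 1.0
        print(partial_credit(attempts=3, hints=1))   # 0.35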

  9. Performance Enhancing Diets and the PRISE Protocol to Optimize Athletic Performance

    PubMed Central

    Arciero, Paul J.; Ward, Emery

    2015-01-01

    The training regimens of modern-day athletes have evolved from the sole emphasis on a single fitness component (e.g., endurance athlete or resistance/strength athlete) to an integrative, multimode approach encompassing all four of the major fitness components: resistance (R), interval sprints (I), stretching (S), and endurance (E) training. Athletes rarely, if ever, focus their training on only one mode of exercise but instead routinely engage in a multimode training program. In addition, timed daily protein (P) intake has become a hallmark for all athletes. Recent studies, including from our laboratory, have validated the effectiveness of this multimode paradigm (RISE) and protein-feeding regimen, which we have collectively termed PRISE. Unfortunately, sports nutrition recommendations and guidelines have lagged behind the PRISE integrative nutrition and training model and therefore limit an athlete's ability to succeed. Thus, it is the purpose of this review to provide a clearly defined roadmap linking specific performance enhancing diets (PEDs) with each PRISE component to facilitate optimal nourishment and ultimately optimal athletic performance. PMID:25949823

  10. Updating a finite element model to the real experimental setup by thermographic measurements and adaptive regression optimization

    NASA Astrophysics Data System (ADS)

    Peeters, J.; Arroud, G.; Ribbens, B.; Dirckx, J. J. J.; Steenackers, G.

    2015-12-01

    In non-destructive evaluation, the use of finite element models to evaluate structural behavior and to optimize the experimental setup can complement the inspector's experience. A new adaptive response surface methodology, adapted specifically to thermal problems, is used to update the experimental setup parameters in a finite element model to the state of the test sample measured by pulsed thermography. Polyvinyl chloride (PVC) test samples are used to examine the results for thermal insulator models. A comparison of the achieved results is made by changing the target values from experimental pulsed thermography data to a fixed validation model. Several optimizers are compared and discussed with a focus on speed and accuracy. A more than 20-fold increase in time efficiency and an accuracy of over 99.5% are achieved by choosing the correct parameter sets and optimizer. Proper parameter set selection criteria are defined, and the influence of the choice of optimization algorithm and parameter set on accuracy and convergence time is investigated.
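
    A minimal sketch of the response-surface idea follows, with a cheap closed-form function standing in for the expensive finite element run and a single setup parameter being updated; the adaptive methodology in the paper handles multiple parameters and is considerably more elaborate.

        import numpy as np

        # Stand-in for an expensive FE thermal simulation: the mismatch between the
        # simulated and measured thermographic response as a function of one setup
        # parameter. Purely illustrative; the real objective comes from the FE model.
        def fe_mismatch(q):
            return (q - 2.7) ** 2 + 0.05 * np.sin(8.0 * q)

        def adaptive_response_surface(lo=0.0, hi=5.0, rounds=5, samples=7):
            # Fit a quadratic response surface to a handful of "FE runs", jump to its
            # minimum, then shrink the search window around that point and repeat.
            for _ in range(rounds):
                x = np.linspace(lo, hi, samples)
                y = np.array([fe_mismatch(v) for v in x])
                a, b, _ = np.polyfit(x, y, 2)            # y ~ a*x^2 + b*x + c
                x_opt = float(np.clip(-b / (2.0 * a), lo, hi))
                half_span = 0.25 * (hi - lo)             # shrink the window around the optimum
                lo, hi = x_opt - half_span, x_opt + half_span
            return x_opt

        print(adaptive_response_surface())               # approaches the mismatch minimum near 2.7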

  11. Design and adaptation of ocean observing systems at coastal scales, the role of data assimilation in the optimization of measures.

    NASA Astrophysics Data System (ADS)

    Brandini, Carlo; Taddei, Stefano; Fattorini, Maria; Doronzo, Bartolomeo; Lapucci, Chiara; Ortolani, Alberto; Poulain, Pierre Marie

    2015-04-01

    In the current view, the design and implementation of observation systems are not limited to the capability to observe phenomena of particular interest in a given sea area, but must ensure maximum benefit to the analysis/prediction systems that are based on numerical models. The design of these experimental systems takes great advantage of the use of synthetic data whose characteristics are as close as possible to the observed data (e.g. in situ), in terms of spatial and temporal variability, particularly when the power spectrum of the observed signal is close to that reproduced by a numerical model. This method, usually referred to as an OSSE (Observing System Simulation Experiment), is a preferred way to test synthetic data by assimilating them into models as if they were real data, with the advantage of defining different datasets for data assimilation at almost no cost. This applies both to the design of fixed networks (such as buoys or coastal radars) and to the improvement of the performance of mobile platforms, such as autonomous marine vehicles, floats or mobile radars, through the optimization of parameters for vehicle guidance, coverage, trajectories or localization of sampling points, according to the adaptive observation concept. In this work we present the results of some experimental activities recently undertaken in the coastal area between the Ligurian and Northern Tyrrhenian seas, which has shown great vulnerability in recent years due to a number of marine accidents and environmental issues. In this cross-border area an observation and forecasting system is being installed as part of the SICOMAR project (PO maritime Italy-France), in order to provide real-time data at high spatial and temporal resolution, and to design interoperable, expandable and flexible observing platforms that can be quickly adapted to the needs of local problems (e.g. accidents at sea). The starting SICOMAR network includes HF coastal radars, FerryBoxes onboard ships
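
    The OSSE idea can be reduced to a toy example: draw synthetic observations from a known "nature run", build an analysis from them, and score candidate network designs by their analysis error. The sketch below uses simple interpolation in place of assimilation into a circulation model; the field, sensor layouts and noise level are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 200)
        truth = np.sin(2 * np.pi * 3 * x)        # "nature run" standing in for the real ocean state

        def osse_score(obs_locations, noise_std=0.1):
            # Crude OSSE: sample synthetic observations from the nature run, build an
            # analysis by linear interpolation of the observations, and return its RMSE
            # against the truth. Real systems would assimilate into a circulation model.
            obs = np.interp(obs_locations, x, truth) + rng.normal(0.0, noise_std, len(obs_locations))
            analysis = np.interp(x, obs_locations, obs)
            return np.sqrt(np.mean((analysis - truth) ** 2))

        # Compare two candidate observing-network designs of equal cost (10 sensors each).
        uniform   = np.linspace(0.05, 0.95, 10)
        clustered = np.linspace(0.05, 0.45, 10)
        print("uniform  :", osse_score(uniform))
        print("clustered:", osse_score(clustered))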

  12. An adaptive multiquadric radial basis function method for expensive black-box mixed-integer nonlinear constrained optimization

    NASA Astrophysics Data System (ADS)

    Rashid, Kashif; Ambani, Saumil; Cetinkaya, Eren

    2013-02-01

    Many real-world optimization problems comprise objective functions that are based on the output of one or more simulation models. As these underlying processes can be time and computation intensive, the objective function is deemed expensive to evaluate. While methods to alleviate this cost in the optimization procedure have been explored previously, less attention has been given to the treatment of expensive constraints. This article presents a methodology for treating expensive simulation-based nonlinear constraints alongside an expensive simulation-based objective function using adaptive radial basis function techniques. Specifically, a multiquadric radial basis function approximation scheme is developed, together with a robust training method, to model not only the costly objective function, but also each expensive simulation-based constraint defined in the problem. The article presents the methodology developed for expensive nonlinear constrained optimization problems comprising both continuous and integer variables. Results from various test cases, both analytical and simulation-based, are presented.
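
    A minimal sketch of the surrogate-building step follows: an interpolating multiquadric RBF is fitted to samples of a (here closed-form) objective and constraint, and the surrogates are searched over a dense candidate set. The robust training method, adaptive refinement and integer-variable handling described in the article are omitted; the functions, bounds and shape parameter are assumptions.

        import numpy as np

        def fit_multiquadric(X, y, c=1.0):
            # Interpolating multiquadric RBF surrogate with basis phi(r) = sqrt(r^2 + c^2).
            r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            w = np.linalg.solve(np.sqrt(r ** 2 + c ** 2), y)
            def predict(Z):
                rz = np.linalg.norm(Z[:, None, :] - X[None, :, :], axis=-1)
                return np.sqrt(rz ** 2 + c ** 2) @ w
            return predict

        # Hypothetical expensive objective and constraint (simulation-based in the
        # article; cheap closed forms stand in here). Feasible when g(x) <= 0.
        f = lambda x: np.sum(x ** 2, axis=-1)
        g = lambda x: 1.0 - np.sum(x, axis=-1)

        rng = np.random.default_rng(3)
        X = rng.uniform(-2.0, 2.0, size=(40, 2))          # initial space-filling design
        f_hat, g_hat = fit_multiquadric(X, f(X)), fit_multiquadric(X, g(X))

        # Cheap search over the surrogates: pick the best predicted-feasible candidate.
        # The article refines the surrogates adaptively and also handles integer variables.
        cand = rng.uniform(-2.0, 2.0, size=(5000, 2))
        feasible = g_hat(cand) <= 0.0
        best = cand[feasible][np.argmin(f_hat(cand)[feasible])]
        print("surrogate suggestion:", best)              # near (0.5, 0.5) for this toy problem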

  13. Adaptation of beef cattle to high-concentrate diets: performance and ruminal metabolism.

    PubMed

    Brown, M S; Ponce, C H; Pulikanti, R

    2006-04-01

    The diet adaptation period is widely considered a critical period of time in which nutritional management practices can promote or impair subsequent performance and health. Performance studies indicate that adapting feedlot cattle with incremental increases in dietary concentrate, from approximately 55 to 90% of diet DM, in 14 d or less, while allowing ad libitum access to the diet, generally results in reduced performance during adaptation or over the entire feeding period. However, the number of cattle involved in these studies does not allow insight into the frequencies of metabolic disorders associated with the management practices tested. Adapting cattle by restricting the quantity of higher-concentrate diets offered shows promise for improving production efficiency, but further development is needed for application in commercial feedlots. Evidence suggests considerable diversity in the ability of animals to cope with ingested cereal grain, and indicates that diet adaptation procedures should affect the frequency of health-impaired or low-performing cattle in a pen. Individuals that seem to effectively regulate voluntary feed intake during adaptation generally display a steady increase in DMI as dietary concentrate is increased. These data also highlight a seemingly counterproductive, repeating cycle of overconsumption, followed by a pronounced reduction in ruminal pH, by cattle that appear to cope less favorably with grain adaptation. At least a portion of this diversity may relate to the maintenance of protozoal populations. Increases in amylolytic bacteria seemed to follow the increment of additional concentrate. Protozoa were most numerous when the diet contained approximately 60% concentrate, and lactate-utilizing bacteria increased more markedly when the diet contained more than approximately 70% concentrate. Available in vivo data suggest that the number of lactate-utilizing bacteria may reach a plateau for a given diet composition after 2 to 7 d, but

  14. Crop planting date optimization: An approach for climate change adaptation in West Africa

    NASA Astrophysics Data System (ADS)

    Waongo, Moussa; Laux, Patrick; Kunstmann, Harald

    2014-05-01

    Agriculture is the main source of income for the population and the main driver of the economy in Africa, particularly in West Africa. West African agriculture is dominated by rainfed agriculture. This agricultural system is characterized by smallholder and subsistence farming and a limited use of crop production inputs such as machines, fertilizers and pesticides. Therefore, crop yield is strongly influenced by climate fluctuation and is highly vulnerable to climate change and climate variability. To reduce climate risk to crop production, tailored agricultural management strategies are required. Management strategies such as tailored crop planting dates might contribute both to reducing crop failure and to increasing crop production. In addition, unlike the aforementioned crop production inputs, the use of tailored planting dates is costless for farmers. Thus, efforts to improve crop production by optimizing the crop planting date can contribute to alleviating food insecurity in West Africa in the context of climate change. In this study, the process-based crop model GLAM (General Large Area Model for annual crops), in combination with a fuzzy-logic approach for planting dates, has been coupled with a genetic algorithm to derive Optimized Planting Dates (OPDs) for maize cropping in Burkina Faso, West Africa. For a specific location, the derived OPD corresponds to a time window for crop planting. To analyze the performance of the OPD approach, the derived OPDs have been compared to two well-known planting date methods in West Africa. The results showed a mean OPD ranging from May 1st (South-West) to July 11th (North) across the country. In comparison with the well-known methods, the OPD approach yielded the earliest planting dates across Burkina Faso. The deviation of OPDs from planting dates derived from the well-known methods ranged from 10 to 20 days for the northern and central regions, and less than 10 days for the southern region. With respect
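
    The optimization step can be sketched as a genetic algorithm searching the planting day of year against a crop-model stand-in (a crude rainfall-based yield proxy in place of GLAM and the fuzzy-logic planting criterion); the rainfall series, yield proxy and GA settings are invented for illustration.

        import numpy as np

        # Stand-in for the crop model: simulated maize yield as a function of the
        # planting day of year, given a synthetic daily rainfall series.
        rng = np.random.default_rng(7)
        rain = np.clip(rng.normal(3.0, 4.0, 365), 0.0, None)    # hypothetical daily rainfall (mm)

        def simulated_yield(planting_day, season_length=120):
            day = int(planting_day) % 365
            window = rain[day:day + season_length]
            if len(window) < season_length:                     # season would run past year end
                return 0.0
            return window.sum() - 5.0 * np.sum(window < 0.5)    # reward rain, penalize dry days

        def ga_planting_day(pop_size=40, generations=100):
            pop = rng.integers(1, 245, size=pop_size).astype(float)
            for _ in range(generations):
                fit = np.array([simulated_yield(d) for d in pop])
                parents = pop[np.argsort(fit)][-pop_size // 2:]             # keep the best half
                children = rng.choice(parents, size=pop_size) + rng.normal(0.0, 5.0, pop_size)
                pop = np.clip(children, 1, 244)
            return int(pop[np.argmax([simulated_yield(d) for d in pop])])

        print("optimized planting day of year:", ga_planting_day())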

  15. Optimized performance for neutron interrogation to detect SNM

    SciTech Connect

    Slaughter, D R; Asztalos, S J; Biltoft, P J; Church, J A; Descalle, M; Hall, J M; Luu, T C; Manatt, D R; Mauger, G J; Norman, E B; Petersen, D C; Pruet, J A; Prussin, S G

    2007-02-14

    A program of simulations and validating experiments was utilized to evaluate a concept for neutron interrogation of commercial cargo containers that would reliably detect special nuclear material (SNM). The goals were to develop an interrogation system capable of detecting a 5 kg solid sphere of high-enriched uranium (HEU) even when deeply embedded in commercial cargo. Performance goals included a minimum detection probability, P_d ≥ 95%, a maximum occurrence of false positive indications, P_fA ≤ 0.001, and maximum scan duration of t ≤ 1 min. The conditions necessary to meet these goals were demonstrated in experimental measurements even when the SNM is deeply buried in any commercial cargo, and are projected to be met successfully in the most challenging cases of steel or hydrocarbons at areal density ρL ≤ 150 g/cm². Optimal performance was obtained with a collimated (ΔΘ = ±15°) neutron beam at energy E_n = 7 MeV produced by the D(d,n) reaction with the deuteron energy E_d = 4 MeV. Two fission product signatures are utilized to uniquely identify SNM, including delayed neutrons detected in a large array of polyethylene moderated ³He proportional counters and high energy β-delayed fission product γ-radiation detected in a large array of 61 × 61 × 25 cm³ plastic scintillators. The latter detectors are nearly blind to normal terrestrial background radiation by setting an energy threshold on the detection at E_min ≥ 3 MeV. Detection goals were attained with a low beam current (I_d = 15-65 µA) source up to ρL = 75 g/cm² utilizing long irradiations, T = 30 sec, and long counting times, t = 30-100 sec. Projecting to a higher beam current, I_d ≥ 600 µA and larger detector array the detection and false alarm goals would be attained even with intervening cargo overburden as large as ρL ≤ 150 g/cm². The latter cargo thickness corresponds to

  16. An Agent-Based Optimization Framework for Engineered Complex Adaptive Systems with Application to Demand Response in Electricity Markets

    NASA Astrophysics Data System (ADS)

    Haghnevis, Moeed

    The main objective of this research is to develop an integrated method to study emergent behavior and the consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach, including behavioral and structural aspects, is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and behavior of components as a result of their connections and in relation to their environment. This research describes and uses the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way, concepts of complex systems science are adapted to management science and system-of-systems engineering. In particular, an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behavior in ECASs. Demonstrated on the U.S. electricity markets, the ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and the Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it makes a unique and significant contribution to exploring and representing concrete managerial insights for ECASs and to offering new optimized actions and modeling paradigms in agent-based simulation.
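
    A toy agent-based sketch of the demand-response setting is given below: consumer agents enroll when a price incentive plus peer adoption among their contacts exceeds an individual threshold, and the aggregate load is tracked over time. The network, thresholds and load figures are invented; the dissertation's multi-layer framework is far richer than this illustration.

        import numpy as np

        rng = np.random.default_rng(11)
        N = 500                                      # consumer agents
        adopted = rng.random(N) < 0.05               # initial demand-response adopters
        threshold = rng.uniform(0.1, 0.6, N)         # individual willingness thresholds
        neighbors = rng.integers(0, N, size=(N, 8))  # random social network (8 contacts each)

        def step(price_signal):
            # One simulation tick: an agent enrolls in demand response when the price
            # incentive plus peer adoption among its contacts exceeds its threshold.
            global adopted
            peer_share = adopted[neighbors].mean(axis=1)
            adopted = adopted | (0.5 * price_signal + 0.5 * peer_share > threshold)
            baseline_load = 1.0                      # kW per agent, illustrative
            curtailment = 0.3                        # fraction shed by enrolled agents
            return N * baseline_load - curtailment * adopted.sum()

        for t, price in enumerate([0.2, 0.4, 0.6, 0.6, 0.6]):
            print(f"tick {t}: aggregate load = {step(price):.1f} kW")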

  17. Microbial community succession mechanism coupling with adaptive evolution of adsorption performance in chalcopyrite bioleaching.

    PubMed

    Feng, Shoushuai; Yang, Hailin; Wang, Wu

    2015-09-01

    The community succession mechanism of Acidithiobacillus sp., coupled with the adaptive evolution of adsorption performance, was systematically investigated. Specifically, the μmax of attached and free cells was increased and the peak time was moved ahead, indicating that the cell growth of both Acidithiobacillus ferrooxidans and Acidithiobacillus thiooxidans was promoted. In the mixed-strain system, the domination course of A. thiooxidans was dramatically shortened from day 22 to day 15, although the community structure finally approached that of the normal system. Compared to A. ferrooxidans, more positive effects of adaptive evolution on the cell growth of A. thiooxidans were observed in both the single- and mixed-strain systems. Moreover, higher concentrations of sulfate and ferric ions indicated that both sulfur and iron metabolism were enhanced, especially for A. thiooxidans. Consistently, copper ion production was improved from 65.5 to 88.5 mg/L. This new adaptive evolution and community succession mechanism may be useful for guiding similar bioleaching processes. PMID:25978855

  18. Performance of a MEMS-based Adaptive Optics Optical Coherence Tomography System

    SciTech Connect

    Evans, J; Zadwadzki, R J; Jones, S; Olivier, S; Opkpodu, S; Werner, J S

    2008-01-16


  19. Extreme Adaptive Optics Testbed: Performance and Characterization of a 1024 Deformable Mirror

    SciTech Connect

    Evans, J W; Morzinski, K; Severson, S; Poyneer, L; Macintosh, B; Dillon, D; Reza, L; Gavel, D; Palmer, D

    2005-10-30

    We have demonstrated that a microelectrical mechanical systems (MEMS) deformable mirror can be flattened to < 1 nm RMS within controllable spatial frequencies over a 9.2-mm aperture, making it a viable option for high-contrast adaptive optics systems (also known as Extreme Adaptive Optics). The Extreme Adaptive Optics Testbed at UC Santa Cruz is being used to investigate and develop technologies for high-contrast imaging, especially wavefront control. A phase shifting diffraction interferometer (PSDI) measures wavefront errors with sub-nm precision and accuracy for metrology and wavefront control. Consistent flattening required testing and characterization of the individual actuator response, including the effects of dead and low-response actuators. Stability and repeatability of the MEMS devices were also tested. An error budget for MEMS closed-loop performance will summarize the MEMS characterization.
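
    The flattening step can be sketched as a least-squares (pseudoinverse) control loop: given an influence-function matrix and a measured wavefront, actuator commands are iterated until the residual reaches the uncontrollable floor. The matrix, wavefront, dimensions and loop gain below are synthetic stand-ins, not testbed data or the testbed's control software.

        import numpy as np

        rng = np.random.default_rng(5)
        n_pix, n_act = 400, 144                      # wavefront samples and actuators (illustrative)
        IF = rng.standard_normal((n_pix, n_act))     # stand-in influence-function matrix
        dead = rng.choice(n_act, size=3, replace=False)
        IF[:, dead] = 0.0                            # dead actuators contribute nothing

        # Synthetic starting surface: mostly correctable shape plus uncorrectable noise.
        wavefront = IF @ rng.standard_normal(n_act) + 2.0 * rng.standard_normal(n_pix)
        commands = np.zeros(n_act)
        ctrl = np.linalg.pinv(IF, rcond=1e-3)        # pseudoinverse rejects dead/low-response modes

        # Simple integrator loop: measure the residual, update commands, re-apply them.
        gain = 0.5
        for it in range(20):
            residual = wavefront + IF @ commands
            commands -= gain * (ctrl @ residual)
            print(f"iteration {it}: residual RMS = {np.std(residual):.2f} (arbitrary units)")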

  20. Asynchronous multilevel adaptive methods for solving partial differential equations on multiprocessors - Performance results

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.