Optimization of life support systems and their systems reliability
NASA Technical Reports Server (NTRS)
Fan, L. T.; Hwang, C. L.; Erickson, L. E.
1971-01-01
The identification, analysis, and optimization of life support systems and subsystems have been investigated. For each system or subsystem considered, the procedure involves the establishment of a set of system equations (or mathematical model) based on theory and experimental evidence; the analysis and simulation of the model; the optimization of the operation, control, and reliability; analysis of the sensitivity of the system based on the model; and, if possible, experimental verification of the theoretical and computational results. Research activities include: (1) modeling of air flow in a confined space; (2) review of several different gas-liquid contactors utilizing centrifugal force; (3) review of carbon dioxide reduction contactors in space vehicles and other enclosed structures; (4) application of modern optimal control theory to environmental control of confined spaces; (5) optimal control of a class of nonlinear diffusional distributed parameter systems; (6) optimization of system reliability of life support systems and subsystems; (7) modeling, simulation, and optimal control of the human thermal system; and (8) analysis and optimization of the water-vapor electrolysis cell.
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Reliability-based optimization of an active vibration controller using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Saraygord Afshari, Sajad; Pourtakdoust, Seid H.
2017-04-01
Many modern industrialized systems such as aircraft, rotating turbines, satellite booms, etc., cannot perform their desired tasks accurately if their uninhibited structural vibrations are not controlled properly. Structural health monitoring and online reliability calculations are emerging means to handle system-imposed uncertainties. Because stochastic forcing is unavoidable in most engineering systems, it often must be taken into account in the control design process. In this research, smart material technology is utilized for structural health monitoring and control in order to keep the system in a reliable performance range. In this regard, a reliability-based cost function is assigned both for controller gain optimization and for sensor placement. The proposed scheme is implemented and verified for a wing section. Comparison of results for the frequency responses shows the potential applicability of the presented technique.
Autonomous Energy Grids | Grid Modernization | NREL
Autonomous energy grids control themselves using advanced machine learning and simulation to create resilient, reliable, and affordable optimized energy systems. Current frameworks to monitor, control, and optimize large-scale energy systems draw on optimization theory, control theory, big data analytics, and complex system theory and modeling.
Reliability-based structural optimization: A proposed analytical-experimental study
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson; Nikolaidis, Efstratios
1993-01-01
An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
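As a concrete (if much simpler) illustration of the reliability metric being minimized, the sketch below estimates the probability of violating a design requirement by plain Monte Carlo sampling. The first-order closed loop, the gain distribution, and the settling-time requirement are all illustrative assumptions, not the paper's hybrid deterministic-sampling/asymptotic method.

```python
import numpy as np

rng = np.random.default_rng(0)

def settling_time(k):
    # First-order closed loop with time constant 1/k; ~2% settling in 4 time constants.
    return 4.0 / k

# Assumed uncertain plant gain k ~ Normal(2.0, 0.3), folded to stay positive.
k_samples = np.abs(rng.normal(2.0, 0.3, size=100_000))

# Requirement (assumed): settle within 2.5 seconds.
violations = settling_time(k_samples) > 2.5
print("P(requirement violated) ~", violations.mean())
```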
A reliable algorithm for optimal control synthesis
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1992-01-01
In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H²-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Padé series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes demonstrate the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst-case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
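For readers who want the flavor of the cost functional involved, here is a minimal sketch of evaluating an H²-like quadratic cost via the observability Lyapunov equation. The closed-loop matrices (A, B, C) are assumed for illustration, and this is not the report's Padé-based algorithm, which is built to handle defective systems and to supply gradients.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed stable closed-loop system (illustrative, not from the report).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])     # closed-loop dynamics
B = np.array([[0.0], [1.0]])     # disturbance input matrix
C = np.array([[1.0, 0.0]])       # performance output matrix

# Solve A^T P + P A + C^T C = 0 for the observability Gramian P.
P = solve_continuous_lyapunov(A.T, -C.T @ C)

# Squared H2 norm of the closed loop: J = trace(B^T P B).
J = float(np.trace(B.T @ P @ B))
print("H2-like cost:", J)
```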
Reliable numerical computation in an optimal output-feedback design
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1991-01-01
A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is part of a design algorithm for an optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when applied to control-law synthesis for systems with densely packed modes, where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Padé series approximation, this approach does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M
2016-03-01
This paper proposes a novel method to address reliability and technical problems of microgrids (MGs), based on designing a number of self-adequate autonomous sub-MGs by adopting an MG clustering approach. In doing so, a multi-objective optimization problem is developed in which power loss reduction, voltage profile improvement, and reliability enhancement are the objective functions. To solve the optimization problem, a hybrid algorithm named HS-GA is provided, based on genetic and harmony search algorithms, and a load flow method is given to model different types of DGs as droop controllers. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method.
Perceptual attraction in tool use: evidence for a reliability-based weighting mechanism.
Debats, Nienke B; Ernst, Marc O; Heuer, Herbert
2017-04-01
Humans are well able to operate tools whereby their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object were found to be biased toward each other. We propose that this perceptual attraction is based on the principles by which the brain integrates redundant sensory information of single objects or events, known as optimal multisensory integration. That is, (1) sensory information about the hand and the tool are weighted according to their relative reliability (i.e., inverse variances), and (2) the unisensory reliabilities sum up in the integrated estimate. We assessed whether perceptual attraction is consistent with optimal multisensory integration model predictions. We used a cursor-control tool-use task in which we manipulated the relative reliability of the unisensory hand and cursor position estimates. The perceptual biases shifted according to these relative reliabilities, with an additional bias due to contextual factors that were present in experiment 1 but not in experiment 2. The biased position judgments' variances were, however, systematically larger than the predicted optimal variances. Our findings suggest that the perceptual attraction in tool use results from a reliability-based weighting mechanism similar to optimal multisensory integration, but that certain boundary conditions for optimality might not be satisfied. NEW & NOTEWORTHY Kinematic tool use is associated with a perceptual attraction between the spatially separated hand and the effective part of the tool. We provide a formal account for this phenomenon, thereby showing that the process behind it is similar to optimal integration of sensory information relating to single objects.
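The weighting scheme in points (1) and (2) reduces to the standard inverse-variance (maximum-reliability) fusion rule. The following is a minimal sketch with made-up numbers, not the authors' experimental analysis.

```python
# Fuse two position estimates with weights proportional to their
# reliabilities (inverse variances); the fused variance is the inverse
# of the summed reliabilities, as in optimal multisensory integration.
def fuse(x_hand, var_hand, x_cursor, var_cursor):
    r_hand, r_cursor = 1.0 / var_hand, 1.0 / var_cursor   # reliabilities
    x_fused = (r_hand * x_hand + r_cursor * x_cursor) / (r_hand + r_cursor)
    var_fused = 1.0 / (r_hand + r_cursor)
    return x_fused, var_fused

# A less reliable hand estimate is pulled toward the more reliable cursor.
print(fuse(x_hand=0.0, var_hand=4.0, x_cursor=1.0, var_cursor=1.0))  # (0.8, 0.8)
```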
Honing process optimization algorithms
NASA Astrophysics Data System (ADS)
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are described, and key concepts are emphasized: the optimization task for honing operations, the optimal structure of honing working cycles, stepped and stepless honing cycles, and the simulation of processing and its purpose. It is noted that the reliability of the mathematical model determines the quality parameters of honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece over a sufficiently wide region and can be used to operate the CC743 CNC machine.
Advanced CHP Control Algorithms: Scope Specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.
NASA Astrophysics Data System (ADS)
Gorzelic, P.; Schiff, S. J.; Sinha, A.
2013-04-01
Objective. To explore the use of classical feedback control methods to achieve an improved deep brain stimulation (DBS) algorithm for application to Parkinson's disease (PD). Approach. A computational model of PD dynamics was employed to develop model-based rational feedback controller design. The restoration of thalamocortical relay capabilities to patients suffering from PD is formulated as a feedback control problem with the DBS waveform serving as the control input. Two high-level control strategies are tested: one that is driven by an online estimate of thalamic reliability, and another that acts to eliminate substantial decreases in the inhibition from the globus pallidus interna (GPi) to the thalamus. Control laws inspired by traditional proportional-integral-derivative (PID) methodology are prescribed for each strategy and simulated on this computational model of the basal ganglia network. Main Results. For control based upon thalamic reliability, a strategy of frequency proportional control with proportional bias delivered the optimal control achieved for a given energy expenditure. Control based upon synaptic inhibitory output from the GPi performed very well in comparison with reliability-based control, with considerable further reduction in energy expenditure relative to that of open-loop DBS. The best controller performance was amplitude proportional with derivative control and integral bias, which is full PID control. We demonstrated how optimizing the three components of PID control is feasible in this setting, although the complexity of these optimization functions argues for adaptive methods in implementation. Significance. Our findings point to the potential value of model-based rational design of feedback controllers for Parkinson's disease.
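A hedged sketch of the PID-style control law underlying both strategies follows. The gains, sampling period, and the error signal (for example, the shortfall in estimated thalamic reliability) are illustrative assumptions; the study itself maps such signals onto DBS amplitude or frequency.

```python
# Minimal discrete-time PID controller with a bias term, of the kind the
# study composes into proportional/derivative/integral DBS control laws.
class PID:
    def __init__(self, kp, ki, kd, dt, bias=0.0):
        self.kp, self.ki, self.kd, self.dt, self.bias = kp, ki, kd, dt, bias
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = (self.kp * error + self.ki * self.integral
             + self.kd * derivative + self.bias)
        return max(u, 0.0)   # stimulation parameters cannot go negative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.update(error=0.3)    # error: 1 - estimated thalamic reliability
```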
NASA Astrophysics Data System (ADS)
Zhu, Z. W.; Zhang, W. D.; Xu, J.
2014-03-01
The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential terms were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The conditions for stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were verified by experiments. The results are helpful for engineering applications of GMF.
Robust Control Design for Systems With Probabilistic Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in the time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis, allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-04-19
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, because the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which maximizes energy efficiency in theory provided that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy is enhanced while a suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than that reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.
Optimal Discrete Event Supervisory Control of Aircraft Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Litt, Jonathan (Technical Monitor); Ray, Asok
2004-01-01
This report presents an application of the recently developed theory of optimal Discrete Event Supervisory (DES) control that is based on a signed real measure of regular languages. The DES control techniques are validated on an aircraft gas turbine engine simulation test bed. The test bed is implemented on a networked computer system in which two computers operate in the client-server mode. Several DES controllers have been tested for engine performance and reliability.
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
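The following is a minimal global-best PSO sketch of the kind applied in the thesis. The sphere objective stands in for the far more complex FAC rocket performance model, and the inertia and acceleration coefficients are common textbook choices rather than the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                 # placeholder for, e.g., negative specific impulse
    return np.sum(x**2, axis=1)

n, dim, iters = 30, 3, 200        # swarm size, design variables, iterations
w, c1, c2 = 0.7, 1.5, 1.5         # inertia, cognitive, and social coefficients

pos = rng.uniform(-5.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Pull each particle toward its personal best and the global best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("optimum found near:", gbest)
```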
Optimal Management of Redundant Control Authority for Fault Tolerance
NASA Technical Reports Server (NTRS)
Wu, N. Eva; Ju, Jianhong
2000-01-01
This paper is intended to demonstrate the feasibility of a solution to a fault tolerant control problem. It explains, through a numerical example, the design and the operation of a novel scheme for fault tolerant control. The fundamental principle of the scheme was formalized in [5] based on the notion of normalized nonspecificity. The novelty lies in the use of a reliability criterion for redundancy management, which therefore leads to a high overall system reliability.
NASA Astrophysics Data System (ADS)
Martowicz, Adam; Uhl, Tadeusz
2012-10-01
The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now commonly applied, especially in the automotive industry, taking advantage of integrating both the mechanical structure and the electronic control circuit on one board. Their frequent use motivates the elaboration of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices which is based on the theory of reliability-based robust design optimization. This takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each checked design configuration, the assessment of uncertainty propagation is performed with a meta-modeling technique. The described procedure is illustrated with an example of the optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed the introduction of several physical phenomena to correctly model the electrostatic actuation and the squeezing effect present between electrodes. The optimization was preceded by sensitivity analysis to establish the design and uncertain domains. The genetic algorithms fulfilled the defined optimization task effectively. The best discovered individuals are characterized by a minimized value of the multi-criteria objective function while satisfying the constraint on material strength. The restriction on the maximum equivalent stresses was introduced through a conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xiangqi; Zhang, Yingchen
This paper presents an optimal voltage control methodology with coordination among different voltage-regulating resources, including controllable loads, distributed energy resources such as energy storage and photovoltaics (PV), and utility voltage-regulating devices such as voltage regulators and capacitors. The proposed methodology could effectively tackle the overvoltage and voltage regulation device distortion problems brought by high penetrations of PV to improve grid operation reliability. A voltage-load sensitivity matrix and voltage-regulator sensitivity matrix are used to deploy the resources along the feeder to achieve the control objectives. Mixed-integer nonlinear programming is used to solve the formulated optimization control problem. The methodology has been tested on the IEEE 123-feeder test system, and the results demonstrate that the proposed approach could actively tackle the voltage problem brought about by high penetrations of PV and improve the reliability of distribution system operation.
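To give the flavor of the sensitivity-matrix step, here is a minimal sketch: given an assumed voltage-load sensitivity matrix and target voltage corrections, a least-squares solve suggests real-power adjustments at the controllable nodes. The numbers are illustrative, and the paper embeds this deployment inside a mixed-integer nonlinear program rather than a plain least-squares solve.

```python
import numpy as np

# Assumed voltage-load sensitivity matrix S[i][j] = dV_i / dP_j (p.u. per kW).
S = np.array([[0.0020, 0.0008],
              [0.0008, 0.0025]])

dV_needed = np.array([-0.03, -0.05])   # required voltage corrections (p.u.)

# Least-squares estimate of the load/PV real-power adjustments (kW).
dP, *_ = np.linalg.lstsq(S, dV_needed, rcond=None)
print("suggested real-power adjustments (kW):", dP)
```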
Directions in propulsion control
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.
1990-01-01
Discussed here is research at NASA Lewis in the area of propulsion controls as driven by trends in advanced aircraft. The objective of the Lewis program is to develop the technology for advanced reliable propulsion control systems and to integrate the propulsion control with the flight control for optimal full-system control.
Autonomous Control Modes and Optimized Path Guidance for Shipboard Landing in High Sea States
2017-04-15
Progress report under Contract # N00014-14-C-0004, "Autonomous Control Modes and Optimized Path Guidance for Shipboard Landing in High Sea States." The report addresses recovery of a VTOL UAV: there is a clear need for additional levels of stability and control augmentation and, ultimately, fully autonomous landing. The non-output variables can be estimated by reliable linear... [Figure residue removed: singular-value plot, singular values (dB) versus frequency (rad/s).]
Reliability of Fault Tolerant Control Systems. Part 2
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2000-01-01
This paper reports Part II of a two-part effort intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and the high risks associated with decisions in redundancy management due to multiple sources of uncertainty and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that an enhanced overall system reliability can be achieved with a control law of superior robustness, with an estimator of higher resolution, and with a control performance requirement of lesser stringency.
NASA Astrophysics Data System (ADS)
Momoh, James A.; Salkuti, Surender Reddy
2016-06-01
This paper proposes a stochastic optimization technique for solving the Voltage/VAr control problem including load demand and Renewable Energy Resources (RERs) variation. RERs inherently introduce stochastic behavior into the system. Voltage/VAr control is one of the important challenges and a prime means of handling power system complexity and reliability; hence it is a fundamental requirement for all utility companies. A robust and efficient Voltage/VAr optimization technique is needed to meet peak demand and reduce system losses. Voltages beyond the limit may damage costly substation devices as well as equipment at the consumer end. In particular, RERs introduce more disturbances, and some RERs are not even capable of meeting the VAr demand. Therefore, there is a strong need for Voltage/VAr control in an RERs environment. This paper aims at the development of an optimal scheme for Voltage/VAr control involving RERs. Latin Hypercube Sampling (LHS) is used to cover the full range of variables while maximally satisfying the marginal distributions. A backward scenario reduction technique is used to reduce the number of scenarios effectively while maximally retaining the fitting accuracy of the samples. The developed optimization scheme is tested on the IEEE 24-bus Reliability Test System (RTS) considering load demand and RERs variation.
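A minimal sketch of Latin Hypercube Sampling as used here: each variable's range is split into n equal-probability strata, one point is drawn per stratum, and the strata are randomly paired across dimensions. The implementation below is a generic illustration; the mapping of columns to load and RER variables is an assumption.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    u = rng.random((n_samples, n_dims))                # jitter inside each stratum
    samples = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        strata = rng.permutation(n_samples)            # random pairing of strata
        samples[:, j] = (strata + u[:, j]) / n_samples # one sample per stratum in (0, 1)
    return samples

rng = np.random.default_rng(42)
scenarios = latin_hypercube(n_samples=100, n_dims=2, rng=rng)
# e.g., map column 0 to load level and column 1 to RER output via inverse CDFs.
```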
Removing Barriers for Effective Deployment of Intermittent Renewable Generation
NASA Astrophysics Data System (ADS)
Arabali, Amirsaman
The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system, including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection, for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint to show to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load-shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between reliability and cost is realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. To mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment, and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA-II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto-optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker's preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.
Optimal Implementations for Reliable Circadian Clocks
NASA Astrophysics Data System (ADS)
Hasegawa, Yoshihiko; Arita, Masanori
2014-09-01
Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.
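A hedged sketch of the kind of phase model described follows: the clock phase advances at its natural rate and is nudged by light through a phase-response curve whose flat segment is the dead zone. The curve shape, period, and light schedule below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def prc(phase):
    # Phase-response curve with a dead zone (illustrative shape):
    # light neither advances nor delays the clock in mid subjective day.
    if 0.25 < phase < 0.45:
        return 0.0
    return 0.05 * np.sin(2.0 * np.pi * phase)

def step(phase, light, dt=0.01, period=24.6):
    # Phase measured in cycles; free-running period slightly longer than 24 h.
    return (phase + dt * (1.0 / period + prc(phase) * light)) % 1.0

phase = 0.0
for hour in np.arange(0.0, 48.0, 0.01):     # two days under a 12:12 light cycle
    light = 1.0 if (hour % 24.0) < 12.0 else 0.0
    phase = step(phase, light)
print("entrained phase after two days:", phase)
```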
Microgrids and distributed generation systems: Control, operation, coordination and planning
NASA Astrophysics Data System (ADS)
Che, Liang
Distributed Energy Resources (DERs), which include distributed generations (DGs), distributed energy storage systems, and adjustable loads, are key components in microgrid operations. A microgrid is a small electric power system integrated with on-site DERs to serve all or some portion of the local load and connected to the utility grid through the point of common coupling (PCC). Microgrids can operate in both grid-connected mode and island mode. The structure and components of hierarchical control for the microgrid at Illinois Institute of Technology (IIT) are discussed and analyzed. Case studies address the reliable and economic operation of the IIT microgrid. The simulation results of IIT microgrid operation demonstrate that hierarchical control and the coordination strategy for distributed energy resources (DERs) are an effective way of optimizing the economic operation and the reliability of microgrids. The benefits and challenges of DC microgrids are addressed with a DC model for the IIT microgrid. We present a hierarchical control strategy including primary, secondary, and tertiary controls for economic operation and the resilience of a DC microgrid. The simulation results verify that the proposed coordinated strategy is an effective way of ensuring the resilient response of DC microgrids to emergencies and optimizing their economic operation at steady state. The concept and prototype of a community microgrid interconnecting multiple microgrids in a community are proposed, and two studies are conducted. For coordination, a novel three-level hierarchical coordination strategy to coordinate the optimal power exchanges among neighboring microgrids is proposed. For planning, a multi-microgrid interconnection planning framework using a probabilistic minimal cut-set (MCS) based iterative methodology is proposed for enhancing the economics, resilience, and reliability of multi-microgrid operations. The implementation of high-reliability microgrids requires proper protection schemes that function effectively in both grid-connected and island modes. This chapter presents a communication-assisted four-level hierarchical protection strategy for high-reliability microgrids and tests the proposed protection strategy on a loop-structured microgrid. The simulation results demonstrate the proposed strategy to be an effective and efficient option for microgrid protection. Additionally, the microgrid topology ought to be optimally planned. To address microgrid topology planning, a graph-partitioning and integer-programming integrated methodology is proposed. This work is not included in the dissertation; interested readers can refer to our related publication.
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
Electric machine differential for vehicle traction control and stability control
NASA Astrophysics Data System (ADS)
Kuruppu, Sandun Shivantha
Evolving requirements in energy efficiency and tightening regulations for reliable electric drivetrains drive the advancement of hybrid electric vehicle (HEV) and full electric vehicle (EV) technology. Different configurations of EV and HEV architectures are evaluated for their performance. Future technology is trending toward utilizing distinctive properties of electric machines not only to improve efficiency but also to realize advanced road adhesion controls and vehicle stability controls. The electric machine differential (EMD) is one such concept under current investigation for applications in the near future. Reliability of a power train is critical; therefore, sophisticated fault detection schemes are essential in guaranteeing reliable operation of a complex system such as an EMD. The research presented here emphasizes the implementation of a 4 kW electric machine differential, a novel single open phase (SPO) fault diagnostic scheme, a real-time slip optimization algorithm, and an EMD-based yaw stability improvement study. The proposed d-q current signature based SPO fault diagnostic algorithm detects the fault within one electrical cycle. The EMD-based extremum-seeking slip optimization algorithm reduces stopping distance by 30% compared to hydraulic-braking-based ABS.
Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM
NASA Astrophysics Data System (ADS)
Sheng, Hanlin; Zhang, Tianhong
2017-08-01
In view of the need for a highly precise and reliable thrust estimator to achieve direct thrust control of aircraft engines, a GSA-LSSVM-based thrust estimator design solution is proposed. It builds on support vector regression (SVR), the least squares support vector machine (LSSVM), and a new optimization algorithm, the gravitational search algorithm (GSA), by performing integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and yields a model with better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfills the needs of direct thrust control of aircraft engines.
Optimal blood glucose level control using dynamic programming based on minimal Bergman model
NASA Astrophysics Data System (ADS)
Rettian Anggita Sari, Maria; Hartono
2018-03-01
The purpose of this article is to simulate the glucose dynamics and insulin kinetics of a diabetic patient. The model used in this research is the non-linear minimal Bergman model. Optimal control theory is then applied to formulate the problem of determining the optimal dose of insulin in the treatment of diabetes mellitus such that the glucose level stays in the normal range over a specific time range. The optimization problem is solved using dynamic programming. The result shows that dynamic programming reliably represents the interaction between glucose and insulin levels in diabetes mellitus patients.
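For reference, a minimal simulation sketch of the Bergman minimal model follows. The parameter values are typical literature numbers rather than the article's, and the constant infusion u stands in for the insulin dose profile that dynamic programming would optimize.

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative minimal-model parameters (not the article's values).
p1, p2, p3, n = 0.028, 0.025, 1.3e-5, 0.23
Gb, Ib = 80.0, 7.0            # basal glucose (mg/dL) and insulin (mU/L)

def bergman(state, t, u):
    G, X, I = state           # glucose, remote insulin action, plasma insulin
    dG = -p1 * (G - Gb) - X * G
    dX = -p2 * X + p3 * (I - Ib)
    dI = -n * (I - Ib) + u(t) # u(t): insulin infusion, the control input
    return [dG, dX, dI]

t = np.linspace(0.0, 400.0, 401)
traj = odeint(bergman, [250.0, 0.0, Ib], t, args=(lambda t: 0.5,))
print("glucose after 400 min:", traj[-1, 0])
```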
ERIC Educational Resources Information Center
Byars, Alvin Gregg
The objectives of this investigation are to develop, describe, assess, and demonstrate procedures for constructing mastery tests to minimize errors of classification and to maximize decision reliability. The guidelines are based on conditions where item exchangeability is a reasonable assumption and the test constructor can control the number of…
Control strategies for planetary rover motion and manipulator control
NASA Technical Reports Server (NTRS)
Trautwein, W.
1973-01-01
An unusual insect-like vehicle designed for planetary surface exploration is made the occasion for a discussion of control concepts in path selection, hazard detection, obstacle negotiation, and soil sampling. A control scheme which actively articulates the pitching motion between a single-loop front module and a dual loop rear module leads to near optimal behavior in soft soil; at the same time the vehicle's front module acts as a reliable tactile forward probe with a detection range much longer than the stopping distance. Some optimal control strategies are discussed, and the photos of a working scale model are displayed.
System principles, mathematical models and methods to ensure high reliability of safety systems
NASA Astrophysics Data System (ADS)
Zaslavskyi, V.
2017-04-01
Modern safety and security systems are composed of a large number of components designed for detection, localization, tracking, collecting, and processing of information from monitoring, telemetry, and control systems. They are required to be highly reliable in order to correctly perform data aggregation, processing, and analysis for subsequent decision-making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task completion and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, models, and algorithms can be used to solve optimal redundancy problems on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
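To make the type-variety idea concrete, here is a small brute-force sketch: for a series system of parallel groups, it chooses how many components of each type to place in each group so that system reliability is maximized under a cost budget. The component data, structure, and exhaustive search are illustrative; the paper formalizes the real problem as a large two-level discrete optimization.

```python
from itertools import product

types = [(0.90, 1.0),   # (reliability, cost) per component type (assumed)
         (0.95, 2.0)]
budget = 10.0

def group_reliability(counts):
    # Parallel group of mixed types: it fails only if every component fails.
    unrel = 1.0
    for (r, _), k in zip(types, counts):
        unrel *= (1.0 - r) ** k
    return 1.0 - unrel

best = None
# Two subsystems in series; up to 3 components of each type per subsystem.
for alloc in product(product(range(4), repeat=len(types)), repeat=2):
    cost = sum(k * c for sub in alloc for (_, c), k in zip(types, sub))
    if cost > budget or any(sum(sub) == 0 for sub in alloc):
        continue
    rel = 1.0
    for sub in alloc:
        rel *= group_reliability(sub)
    if best is None or rel > best[0]:
        best = (rel, alloc, cost)

print("best reliability %.6f with allocation %s at cost %.1f" % best)
```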
A homotopy algorithm for digital optimal projection control GASD-HADOC
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.
1993-01-01
The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-form solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1994-01-01
The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to a probability of 0.9999, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and the mean actuation material volume ratio as the design parameter. At a 0.9 mean actuation material volume ratio, the minimum cost was obtained.
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger, or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach for evaluating permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices, and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements.
Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach for evaluating permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements. PMID:22368497
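As a toy cut of the fault-tree idea, the sketch below derives steady-state device availabilities from MTBF/MTTR figures and folds them through a two-gate tree: an AND gate for a redundant router pair and an OR gate along the sensor-routing-gateway path. The topology and the failure/repair numbers are hypothetical, not taken from the paper.

```python
def availability(mtbf_h, mttr_h):
    # Steady-state availability of a repairable device.
    return mtbf_h / (mtbf_h + mttr_h)

a_sensor = availability(8760.0, 12.0)     # field device
a_router = availability(17520.0, 24.0)    # each of two redundant routers
a_gateway = availability(26280.0, 8.0)

# AND gate (path lost only if both routers are down): unavailabilities multiply.
a_routing = 1.0 - (1.0 - a_router) ** 2
# OR gate (any series element down kills the path): availabilities multiply.
a_path = a_sensor * a_routing * a_gateway
print(f"end-to-end availability: {a_path:.6f}")
```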
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with an average diameter and average length of 15 inches. Structural reliability is based on the axial buckling strength of the cylinder. Both Monte Carlo simulation and the First-Order Reliability Method are considered for reliability analysis, with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and the design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and the elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also, the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on the element reliability index. The methodology, solution procedure and optimization results are included in this report.
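The workflow in this entry (a second-order response surface for buckling strength, then a reliability measure from it) can be imitated with a small Monte Carlo sketch. The distributions, the response-surface coefficients, and the resulting numbers below are illustrative stand-ins, not the report's fitted values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200_000
E1 = rng.normal(20.0e6, 1.0e6, n)     # fiber-direction modulus, psi
t = rng.normal(0.08, 0.004, n)        # wall thickness, in
load = rng.normal(3.0e4, 4.5e3, n)    # applied axial load, lb (CoV 0.15)

# Second-order response-surface stand-in for the buckling strength.
pcr = 0.38 * E1 * t**2 + 2.0e3 * t - 5.0e2

g = pcr - load                        # limit state: failure when g < 0
pf = np.mean(g < 0.0)
beta = -norm.ppf(pf)                  # reliability index implied by the samples
print(f"Pf = {pf:.2e}, reliability index beta = {beta:.2f}")
```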
Accounting for Proof Test Data in a Reliability-Based Design Optimization Framework
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Scotti, Stephen J.
2012-01-01
This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.
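A small numerical illustration (not the paper's formulation) of why proof-test data helps: passing the test truncates the low tail of the strength distribution, so the in-service failure probability is a conditional one. All distributions and the proof level are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
strength = rng.lognormal(mean=np.log(100.0), sigma=0.12, size=n)  # ksi
service = rng.normal(70.0, 6.0, size=n)                           # ksi
proof = 85.0                                                      # proof level, ksi

pf_no_test = np.mean(service > strength)
passed = strength > proof                  # components surviving the proof test
pf_after = np.mean(service[passed] > strength[passed])
print(f"proof-test pass rate {passed.mean():.3f}")
print(f"Pf without test {pf_no_test:.2e}, Pf after proof test {pf_after:.2e}")
```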
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment and constraint update is repeated in the RBSO until the reliability requirements on constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.
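A minimal nonintrusive PCE fragment, under strong simplifications: the entry dynamics are replaced by a scalar toy response of one standard-normal parameter, sampled and fit with probabilists' Hermite polynomials by least squares; surrogate statistics then follow from orthogonality. Nothing here reproduces the paper's trajectory model.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

def model(xi):
    # Stand-in for an expensive trajectory quantity of interest.
    return 12.0 + 1.5 * xi - 0.4 * xi**2 + 0.05 * xi**3

rng = np.random.default_rng(0)
xi = rng.standard_normal(200)             # nonintrusive collocation samples
Psi = hermevander(xi, 4)                  # He_0 .. He_4 basis evaluations
coef, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

# Orthogonality of He_k under N(0,1): mean = c_0, var = sum_{k>=1} k! c_k^2.
mean = coef[0]
var = sum(factorial(k) * coef[k]**2 for k in range(1, 5))
print(f"PCE mean {mean:.3f}, PCE std {np.sqrt(var):.3f}")
```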
NASA Technical Reports Server (NTRS)
Leonard, Michael W.
2013-01-01
Integration of the Control Allocation technique to recover from Pilot Induced Oscillations (CAPIO) System into the control system of a Short Takeoff and Landing Mobility Concept Vehicle simulation presents a challenge because the CAPIO formulation requires that constrained optimization problems be solved at the controller operating frequency. We present a solution that utilizes a modified version of the well-known L-BFGS-B solver. Despite the iterative nature of the solver, the method is seen to converge in real time with sufficient reliability to support three weeks of piloted runs at the NASA Ames Vertical Motion Simulator (VMS) facility. The results of the optimization are seen to be excellent in the vast majority of real-time frames. Deficiencies in the quality of the results in some frames are shown to be improvable with simple termination criteria adjustments, though more real-time optimization iterations would be required.
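The real-time pattern described here, a small bound-constrained problem re-solved every frame with a warm start and an iteration budget, can be sketched with SciPy's L-BFGS-B. The effector matrix, deflection limits, and moment commands are invented, and this least-squares allocation is a stand-in for the actual CAPIO objective.

```python
import numpy as np
from scipy.optimize import minimize

B = np.array([[0.8, 0.5, -0.5, -0.8],      # effector-to-moment map (assumed)
              [0.2, -0.6, -0.6, 0.2]])
bounds = [(-0.5, 0.5)] * 4                 # surface deflection limits, rad

def allocate(d, u0, budget=15):
    # Bound-constrained least squares: min 0.5*||B u - d||^2, warm-started.
    f = lambda u: 0.5 * np.sum((B @ u - d) ** 2)
    g = lambda u: B.T @ (B @ u - d)
    res = minimize(f, u0, jac=g, method="L-BFGS-B",
                   bounds=bounds, options={"maxiter": budget})
    return res.x

u = np.zeros(4)
for d in (np.array([0.3, -0.1]), np.array([0.9, 0.2])):  # successive frames
    u = allocate(d, u)                     # warm start from the previous frame
    print("u =", u.round(3), " residual =", np.round(B @ u - d, 4))
```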
Zhou, Xiangyang; Zhao, Beilei; Gong, Guohao
2015-01-01
This paper presents a method based on co-simulation of a mechatronic system to optimize the control parameters of a two-axis inertially stabilized platform (ISP) applied in an unmanned airship (UA), by which high control performance and reliability of the ISP system are achieved. First, a three-dimensional structural model of the ISP is built by using the three-dimensional parametric CAD software SOLIDWORKS®; then, to analyze the system's kinematic and dynamic characteristics under operating conditions, dynamics modeling is conducted by using the multi-body dynamics software ADAMS™, and the main dynamic parameters such as displacement, velocity, acceleration and reaction curves are obtained through simulation analysis. Those dynamic parameters are then input into the established MATLAB® SIMULINK® controller to simulate and test the performance of the control system. By these means, the ISP control parameters are optimized. To verify the methods, experiments were carried out by applying the optimized parameters to the control system of a two-axis ISP. The results show that co-simulation using virtual prototyping (VP) is effective for obtaining optimized ISP control parameters, eventually leading to high ISP control performance. PMID:26287210
Advanced rotorcraft control using parameter optimization
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1991-01-01
A reliable algorithm for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters is presented. The algorithm is part of a design procedure for an optimal linear dynamic output feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Padé series approximation, this approach does not require the closed-loop system matrix to be diagonalizable. The algorithm has been included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm has been demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
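The paper's key numerical point, evaluating a finite-time quadratic index without diagonalizing the closed loop, can be reproduced with the Van Loan augmented-exponential identity; SciPy's expm is itself built on a Padé approximation. The closed-loop matrix and weights below are illustrative.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.1]])   # closed-loop matrix (assumed)
Q = np.eye(2)                              # state weighting
T = 5.0                                    # finite horizon
x0 = np.array([1.0, 0.0])

n = A.shape[0]
M = np.block([[-A.T, Q], [np.zeros((n, n)), A]])
F = expm(M * T)                            # Pade-based matrix exponential
F12, F22 = F[:n, n:], F[n:, n:]
W = F22.T @ F12                            # = integral_0^T e^{A't} Q e^{At} dt
print(f"finite-time quadratic cost J = {x0 @ W @ x0:.4f}")
```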
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of the probability density functions and the moments, optimization problems that pursue performance-, robustness- and reliability-based designs are studied. By specifying the desired outputs in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of the design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed-form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem, the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.
A reliable data collection/control system
NASA Technical Reports Server (NTRS)
Maughan, Thom
1988-01-01
The Cal Poly Space Project requires a data collection/control system which must be able to reliably record temperature, pressure and vibration data. It must also schedule the 16 electroplating and 2 immiscible alloy experiments so as to optimize use of the batteries, maintain a safe package temperature profile, and run the experiment during conditions of microgravity (and minimum vibration). This system must operate unattended in the harsh environment of space and consume very little power due to limited battery supply. The design of a system which meets these requirements is addressed.
Optimal control of epidemic information dissemination over networks.
Chen, Pin-Yu; Cheng, Shin-Ming; Chen, Kwang-Cheng
2014-12-01
Information dissemination control is of crucial importance to facilitate reliable and efficient data delivery, especially in networks consisting of time-varying links or heterogeneous links. Since the abstraction of information dissemination much resembles the spread of epidemics, epidemic models are utilized to characterize the collective dynamics of information dissemination over networks. From a systematic point of view, we aim to explore the optimal control policy for information dissemination given that the control capability is a function of its distribution time, which is a more realistic model in many applications. The main contributions of this paper are to provide an analytically tractable model for information dissemination over networks, to solve the optimal control signal distribution time for minimizing the accumulated network cost via dynamic programming, and to establish a parametric plug-in model for information dissemination control. In particular, we evaluate its performance in mobile and generalized social networks as typical examples.
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted as shapes, dimensions, probability range factors, and cost. The structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.
Optimal Reservoir Operation using Stochastic Model Predictive Control
NASA Astrophysics Data System (ADS)
Sahu, R.; McLaughlin, D.
2016-12-01
Hydropower operations are typically designed to fulfill contracts negotiated with consumers who need reliable energy supplies, despite uncertainties in reservoir inflows. In addition to providing reliable power, the reservoir operator needs to take into account environmental factors such as downstream flooding or compliance with minimum flow requirements. From a dynamical systems perspective, the reservoir operating strategy must cope with conflicting objectives in the presence of random disturbances. In order to achieve optimal performance, the reservoir system needs to continually adapt to disturbances in real time. Model Predictive Control (MPC) is a real-time control technique that adapts by deriving the reservoir release at each decision time from the current state of the system. Here an ensemble-based version of MPC (SMPC) is applied to a generic reservoir to determine both the optimal power contract, considering future inflow uncertainty, and a real-time operating strategy that attempts to satisfy the contract. Contract selection and real-time operation are coupled in an optimization framework that also defines a Pareto trade-off between the revenue generated from energy production and the environmental damage resulting from uncontrolled reservoir spills. Further insight is provided by a sensitivity analysis of key parameters specified in the SMPC technique. The results demonstrate that SMPC is suitable for multi-objective planning and associated real-time operation of a wide range of hydropower reservoir systems.
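A compact ensemble sketch of the setup: one release schedule is optimized against many equally likely inflow scenarios, trading contract shortfall against wasted releases and penalized spill. The reservoir numbers, penalty weights, and the use of a smooth solver on this mildly non-smooth objective are all simplifications of the SMPC machinery.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
H, S = 24, 30                                   # horizon (h), inflow scenarios
inflow = np.clip(rng.normal(40.0, 12.0, (S, H)), 0.0, None)  # volume units/h
s0, s_max, contract = 720.0, 750.0, 45.0

def expected_cost(u):
    shortfall = np.sum(np.maximum(contract - u, 0.0))   # unmet contract
    waste = np.sum(np.maximum(u - contract, 0.0))       # release beyond contract
    spill = 0.0
    for q in inflow:
        s = s0
        for t in range(H):
            s += q[t] - u[t]
            if s > s_max:                               # uncontrolled spill
                spill += s - s_max
                s = s_max
            s = max(s, 0.0)                             # floor for simplicity
    return 3.0 * shortfall + 0.2 * waste + 5.0 * spill / S

res = minimize(expected_cost, np.full(H, contract), method="L-BFGS-B",
               bounds=[(0.0, 80.0)] * H)
print("mean release:", res.x.mean().round(2), " expected cost:", round(res.fun, 1))
```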
Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties
NASA Astrophysics Data System (ADS)
Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.
2017-12-01
Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks involving flood control decisions. This paper defines the main steps in optimal flood control decision making that constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. Then we propose the risk assessment model, the risk of decision-making errors and rank uncertainty degree to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information in each link of the FODM chain and enable risk-informed decisions with higher reliability.
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
Yeo, Sang-Hoon; Franklin, David W; Wolpert, Daniel M
2016-12-01
Movement planning is thought to be primarily determined by motor costs such as inaccuracy and effort. Solving for the optimal plan that minimizes these costs typically leads to specifying a time-varying feedback controller which both generates the movement and can optimally correct for errors that arise within a movement. However, the quality of the sensory feedback during a movement can depend substantially on the generated movement. We show that by incorporating such state-dependent sensory feedback, the optimal solution incorporates active sensing and is no longer a pure feedback process but includes a significant feedforward component. To examine whether people take into account such state-dependency in sensory feedback we asked people to make movements in which we controlled the reliability of sensory feedback. We made the visibility of the hand state-dependent, such that the visibility was proportional to the component of hand velocity in a particular direction. Subjects gradually adapted to such a sensory perturbation by making curved hand movements. In particular, they appeared to control the late visibility of the movement matching predictions of the optimal controller with state-dependent sensory noise. Our results show that trajectory planning is not only sensitive to motor costs but takes sensory costs into account and argues for optimal control of movement in which feedforward commands can play a significant role.
Model Predictive Control-based Optimal Coordination of Distributed Energy Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming
2013-01-07
Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.
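A rolling-horizon version of such a dispatch can be sketched as a small LP re-solved each step, with only the first move applied. The loads, wind forecast, battery limits, and fuel price below are invented placeholders for the paper's isolated-system study.

```python
import numpy as np
from scipy.optimize import linprog

H, dt = 6, 1.0                              # look-ahead steps, hours
load = np.full(12, 90.0)                    # kW demand
wind = np.array([60, 55, 20, 15, 70, 80, 85, 30, 25, 60, 65, 70.0])  # kW forecast
soc, cap, fuel_cost = 50.0, 100.0, 0.3      # kWh, kWh, $/kWh of diesel energy

for k in range(6):                          # receding horizon
    # Variables: H diesel outputs g_t >= 0, then H battery powers b_t (+ = discharge).
    c = np.concatenate([np.full(H, fuel_cost), np.zeros(H)])
    A_eq = np.hstack([np.eye(H), np.eye(H)])            # g_t + b_t = net load
    b_eq = load[k:k+H] - wind[k:k+H]
    L = np.tril(np.ones((H, H))) * dt                   # cumulative discharge map
    A_ub = np.vstack([np.hstack([np.zeros((H, H)), L]),     # soc_t >= 0
                      np.hstack([np.zeros((H, H)), -L])])   # soc_t <= cap
    b_ub = np.concatenate([np.full(H, soc), np.full(H, cap - soc)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 120)] * H + [(-40, 40)] * H)
    g0, b0 = res.x[0], res.x[H]
    soc -= b0 * dt                          # apply only the first decision
    print(f"t={k}: diesel {g0:6.1f} kW, battery {b0:6.1f} kW, soc {soc:5.1f} kWh")
```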
Reliability Analysis for AFTI-F16 SRFCS Using ASSIST and SURE
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports the results of a study on reliability analysis of an AFTI-F16 Self-Repairing Flight Control System (SRFCS) using the software tools SURE (Semi-Markov Unreliability Range Evaluator) and ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool). The purpose of the study is to investigate the potential utility of the software tools in the ongoing effort of the NASA Aviation Safety Program, where the class of systems must be extended beyond the originally intended class of electronic digital processors. The study concludes that SURE and ASSIST are applicable to reliability analysis of flight control systems. They are especially efficient for sensitivity analysis that quantifies the dependence of system reliability on model parameters. The study also confirms an earlier finding on the dominant role of a parameter called failure coverage. The paper also remarks on issues related to the improvement of coverage and the optimization of redundancy level.
Optimal orientation in flows: providing a benchmark for animal movement strategies
McLaren, James D.; Shamoun-Baranes, Judy; Dokter, Adriaan M.; Klaassen, Raymond H. G.; Bouten, Willem
2014-01-01
Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity. PMID:25056213
Autonomous Energy Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroposki, Benjamin D; Dall-Anese, Emiliano; Bernstein, Andrey
With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, to mention a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and self-optimize in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.
NASA Astrophysics Data System (ADS)
Pinson, Robin Marie
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model nor a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which designs and determines the trajectory. The propellant optimal problem was formulated as a second order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, change of variables, and incorporation of the successive solution method. Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed to find the global minimum provided one exists. In addition, an outer optimization loop using Brent's method determines the optimal flight time corresponding to the minimum propellant usage over all flight times. Inclusion of additional trajectory constraints, solely vertical motion near the landing site and glide slope, were evaluated. Through a theoretical proof involving the Minimum Principle from Optimal Control Theory and the Karush-Kuhn-Tucker conditions it was shown that the relaxed problem is identical to the original problem at the minimum point. Therefore, the optimal solution of the relaxed problem is an optimal solution of the original problem, referred to as lossless convexification. A key finding is that this holds for all levels of gravity model fidelity. The designed thrust magnitude profiles were the bang-bang predicted by Optimal Control Theory. The first high fidelity gravity model employed was the 2x2 spherical harmonics model assuming a perfect triaxial ellipsoid and placement of the coordinate frame at the asteroid's center of mass and aligned with the semi-major axes. The spherical harmonics model is not valid inside the Brillouin sphere and this becomes relevant for irregularly shaped asteroids. Then, a higher fidelity model was implemented combining the 4x4 spherical harmonics gravity model with the interior spherical Bessel gravity model. All gravitational terms in the equations of motion are evaluated with the position vector from the previous iteration, creating the successive solution method. 
Methodology success was shown by applying the algorithm to three triaxial ellipsoidal asteroids with four different rotation speeds using the 2x2 gravity model. Finally, the algorithm was tested using the irregularly shaped asteroid, Castalia.
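A heavily stripped-down instance of the SOCP described above, written with cvxpy: minimize the integrated thrust-acceleration magnitude (a propellant proxy) using the slack-variable relaxation ||a_k|| <= G_k that the lossless-convexification argument justifies. A constant-gravity, fixed-mass double integrator stands in for the successively updated spherical-harmonic model, and every number is invented.

```python
import cvxpy as cp
import numpy as np

N, dt = 60, 2.0                              # steps, step size (s)
g = np.array([0.0, 0.0, -3.0e-3])            # weak asteroid gravity, m/s^2
r0 = np.array([400.0, 200.0, 1000.0])        # initial position, m
v0 = np.array([-3.0, 1.0, -8.0])             # initial velocity, m/s
a_max = 0.5                                  # thrust acceleration limit, m/s^2

r, v = cp.Variable((N + 1, 3)), cp.Variable((N + 1, 3))
a = cp.Variable((N, 3))                      # thrust acceleration
G = cp.Variable(N)                           # slack on thrust magnitude

cons = [r[0] == r0, v[0] == v0, r[N] == 0, v[N] == 0]
for k in range(N):
    cons += [v[k+1] == v[k] + (a[k] + g) * dt,
             r[k+1] == r[k] + v[k] * dt + 0.5 * (a[k] + g) * dt**2,
             cp.norm(a[k]) <= G[k],          # second-order cone relaxation
             G[k] <= a_max]

prob = cp.Problem(cp.Minimize(cp.sum(G) * dt), cons)  # delta-v as propellant proxy
prob.solve()
print(prob.status, " delta-v proxy:", round(prob.value, 2), "m/s")
```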
Hybrid Power Management-Based Vehicle Architecture
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2011-01-01
Hybrid Power Management (HPM) is the integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications (see figure). The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, that provides all power to a common energy storage system that is used to power the drive motors and vehicle accessory systems. This architecture also provides power as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. The key element of HPM is the energy storage system. All generated power is sent to the energy storage system, and all loads derive their power from that system. This can significantly reduce the power requirement of the primary power source, while increasing the vehicle reliability. Ultracapacitors are ideal for an HPM-based energy storage system due to their exceptionally long cycle life, high reliability, high efficiency, high power density, and excellent low-temperature performance. Multiple power sources and multiple loads are easily incorporated into an HPM-based vehicle. A gas turbine is a good primary power source because of its high efficiency, high power density, long life, high reliability, and ability to operate on a wide range of fuels. An HPM controller maintains optimal control over each vehicle component. This flexible operating system can be applied to all vehicles to considerably improve vehicle efficiency, reliability, safety, security, and performance. The HPM-based vehicle architecture has many advantages over conventional vehicle architectures. Ultracapacitors have a much longer cycle life than batteries, which greatly improves system reliability, reduces life-of-system costs, and reduces environmental impact as ultracapacitors will probably never need to be replaced and disposed of. The environmentally safe ultracapacitor components reduce disposal concerns, and their recyclable nature reduces the environmental impact. High ultracapacitor power density provides high power during surges, and the ability to absorb high power during recharging. Ultracapacitors are extremely efficient in capturing recharging energy, are rugged, reliable, maintenance-free, have excellent low-temperature characteristics, provide consistent performance over time, and promote safety as they can be left indefinitely in a safe, discharged state whereas batteries cannot.
Zhang, Xingwu; Wang, Chenxi; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Wang, Shibin
2016-01-01
Milling vibration is one of the most serious factors affecting machining quality and precision. In this paper a novel hybrid error criterion-based frequency-domain LMS active control method is constructed and used for vibration suppression of milling processes by piezoelectric actuators and sensors, in which only one Fast Fourier Transform (FFT) is used and no Inverse Fast Fourier Transform (IFFT) is involved. The correction formulas are derived by a steepest descent procedure and the control parameters are analyzed and optimized. Then, a novel hybrid error criterion is constructed to improve the adaptability, reliability and anti-interference ability of the constructed control algorithm. Finally, based on piezoelectric actuators and acceleration sensors, a simulation of a spindle and a milling process experiment are presented to verify the proposed method. Besides, a protection program is added in the control flow to enhance the reliability of the control method in applications. The simulation and experiment results indicate that the proposed method is an effective and reliable way for on-line vibration suppression, and the machining quality can be obviously improved. PMID:26751448
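A generic frequency-domain LMS loop in the spirit of this entry: one FFT per block of the measured vibration, then a per-bin steepest-descent weight correction. The unit secondary path and the purely tonal signal are simplifying assumptions, and the authors' hybrid error criterion is not reproduced here.

```python
import numpy as np

fs, B = 2048, 256                          # sample rate (Hz), FFT block size
t = np.arange(24 * B) / fs
d = np.sin(2*np.pi*120*t) + 0.3*np.sin(2*np.pi*240*t)   # tonal milling vibration
W = np.zeros(B, dtype=complex)             # per-bin control spectrum
mu = 0.5                                   # adaptation step

for b in range(24):
    D = np.fft.fft(d[b*B:(b+1)*B])         # the single FFT per block
    E = D - W                              # residual spectrum (unit control path)
    W = W + mu * E                         # steepest-descent bin update
    if b % 6 == 0:
        print(f"block {b:2d}: residual power {np.mean(np.abs(E)**2):10.2f}")
```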
Inspection planning development: An evolutionary approach using reliability engineering as a tool
NASA Technical Reports Server (NTRS)
Graf, David A.; Huang, Zhaofeng
1994-01-01
This paper proposes an evolutionary approach for inspection planning which introduces various reliability engineering tools into the process and assesses system trade-offs among reliability, engineering requirements, manufacturing capability and inspection cost to establish an optimal inspection plan. The examples presented in the paper illustrate some advantages and benefits of the new approach. Through the analysis, reliability and engineering impacts due to manufacturing process capability and inspection uncertainty are clearly understood; the most cost-effective and efficient inspection plan can be established and the associated risks are well controlled; some inspection reductions and relaxations are well justified; and design feedback and changes may be initiated from the analysis conclusions to further enhance reliability and reduce cost. The approach is particularly promising as global competition and customer quality improvement expectations rapidly increase.
NASA Astrophysics Data System (ADS)
Madhikar, Pratik Ravindra
The most crucial design feature in designing an Aircraft Electric Power Distribution System (EPDS) is reliability. In an EPDS, power is distributed from top-level generators to bottom-level loads through various sensors, actuators and rectifiers with the help of AC and DC buses and control switches. As the demands of the consumer are never ending and safety is of utmost importance, loads increase and, as a result, so does the power management burden. Therefore, the design of an EPDS should be optimized for maximum efficiency. This thesis discusses an integrated tool, based on a need-based design method and Fault Tree Analysis (FTA), to achieve the optimum design of an EPDS providing maximum reliability in terms of continuous connectivity, power management and minimum cost. If an EPDS is formulated as an optimization problem, it can be solved with the help of connectivity, cost and power constraints by using a linear solver to obtain the desired output of maximum reliability at minimum cost. Furthermore, the thesis discusses the viability and implementation of the resulting topology on typical large aircraft specifications.
Preliminary Full-Scale Tests of the Center for Automated Processing of Hardwoods' Auto-Image
Philip A. Araman; Janice K. Wiedenbeck
1995-01-01
Automated lumber grading and yield optimization using computer controlled saws will be plausible for hardwoods if and when lumber scanning systems can reliably identify all defects by type. Existing computer programs could then be used to grade the lumber, identify the best cut-up solution, and control the sawing machines. The potential value of a scanning grading...
Optimal Operation of a Thermal Energy Storage Tank Using Linear Optimization
NASA Astrophysics Data System (ADS)
Civit Sabate, Carles
In this thesis, an optimization procedure for minimizing the operating costs of a Thermal Energy Storage (TES) tank is presented. The facility on which the optimization is based is the combined cooling, heating, and power (CCHP) plant at the University of California, Irvine. TES tanks provide the ability to decouple the demand for chilled water from its generation by the refrigeration and air-conditioning plants over the course of a day. They can be used to perform demand-side management, and optimization techniques can help to approach their optimal use. The proposed optimization approach provides a fast and reliable methodology for finding the optimal use of the TES tank to reduce energy costs and provides a tool for future implementation of optimal control laws on the system.
Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Lu, Ping
2016-01-01
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant-optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The propellant-optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.
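The outer search over flight time can be sketched with SciPy's bounded Brent-type scalar minimizer. To keep the fragment standalone, the inner convex problem is replaced by a cheap minimum-energy double-integrator cost plus a gravity-loss term; in the real pipeline each evaluation would re-solve the SOCP.

```python
import numpy as np
from scipy.optimize import minimize_scalar

r0 = np.array([400.0, 200.0, 1000.0])   # initial position, m (assumed)
v0 = np.array([-3.0, 1.0, -8.0])        # initial velocity, m/s (assumed)
g_mag = 3.0e-3                          # gravity magnitude, m/s^2

def inner_cost(T):
    # Minimum-energy rendezvous cost to the origin plus a gravity-loss proxy;
    # a stand-in for the propellant returned by the converged inner SOCP.
    return 12*r0@r0/T**3 + 12*r0@v0/T**2 + 4*v0@v0/T + g_mag*T

res = minimize_scalar(inner_cost, bounds=(30.0, 600.0), method="bounded")
print(f"optimal flight time {res.x:.1f} s, cost proxy {res.fun:.3f}")
```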
An intelligent remote control system for ECEI on EAST
NASA Astrophysics Data System (ADS)
Chen, Dongxu; Zhu, Yilun; Zhao, Zhenling; Qu, Chengming; Liao, Wang; Xie, Jinlin; Liu, Wandong
2017-08-01
An intelligent remote control system based on a power distribution unit (PDU) and Arduino has been designed for the electron cyclotron emission imaging (ECEI) system on Experimental Advanced Superconducting Tokamak (EAST). This intelligent system has three major functions: ECEI system reboot, measurement region adjustment and signal amplitude optimization. The observation region of ECEI can be modified for different physics proposals by remotely tuning the optical and electronics systems. Via the remote adjustment of the attenuation level, the ECEI intermediate frequency signal amplitude can be efficiently optimized. The remote control system provides a feasible and reliable solution for the improvement of signal quality and the efficiency of the ECEI diagnostic system, which is also valuable for other diagnostic systems.
NASA Technical Reports Server (NTRS)
Orme, John S.
1995-01-01
The performance seeking control (PSC) algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance. But, because of an observability problem, component levels of degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines. Thus, when the PSC maximum thrust mode is applied, there will be less temperature margin available to be traded for increasing thrust.
A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa
2017-06-01
High-density polyethylene (HDPE) pipes find versatile applicability for transportation of water, sewage and slurry from one place to another. Hence, these pipes undergo tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters. This results in the setting of less-than-optimal values. Hence, there arises a necessity to determine optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, designed experiments (DoE) are conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and the optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run is also conducted to verify the analysis, and the results prove to be in agreement with the main experimental findings; the withstanding pressure shows a significant improvement from 0.60 to 1.004 MPa.
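The S/N step is easy to make concrete. Below, larger-the-better signal-to-noise ratios are computed for replicated withstanding-pressure runs at three pushing-zone temperature levels; the replicate values are made-up placeholders, not the paper's data, and the level with the highest mean S/N would be picked as the optimum.

```python
import numpy as np

def sn_larger_is_better(y):
    # Taguchi larger-the-better S/N ratio in dB.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical replicated withstanding pressures (MPa) per temperature level.
runs = {160: [0.61, 0.66, 0.58], 166: [0.95, 1.00, 0.92], 172: [0.80, 0.84, 0.78]}
for level, y in runs.items():
    print(f"{level} C: S/N = {sn_larger_is_better(y):6.2f} dB")
```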
A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model
USDA-ARS?s Scientific Manuscript database
Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...
Short-term Operation of Multi-purpose Reservoir using Model Predictive Control
NASA Astrophysics Data System (ADS)
Uysal, Gokcen; Schwanenberg, Dirk; Alvarado Montero, Rodolfo; Sensoy, Aynur; Arda Sorman, Ali
2017-04-01
Operation of water structures, especially with conflicting water supply and flood mitigation objectives, is under increasing stress attributed to growing water demand and changing hydro-climatic conditions. Model Predictive Control (MPC) based optimal control solutions have been successfully applied to different water resources applications. In this study, Feedback Control (FBC) and MPC are combined and an improved joint optimization-simulation operating scheme is proposed. Water supply and flood control objectives are fulfilled by incorporating the long-term water supply objectives into a time-dependent variable guide curve policy, whereas extreme floods are attenuated by means of short-term optimization based on MPC. A final hindcasting experiment with imperfect, perturbed forecasts is carried out to assess the lead-time performance and reliability of the forecasts. The framework is tested on Yuvacık Dam reservoir, the main water supply reservoir of Kocaeli City in the northwestern part of Turkey (the Marmara region), which requires challenging gate operation due to restricted downstream flow conditions.
Analysis of explicit model predictive control for path-following control.
Lee, Junho; Chang, Hyuk-Jun
2018-01-01
In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in the optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, a Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration. PMID:29534080
NASA Technical Reports Server (NTRS)
Dennehy, Cornelius J.
2010-01-01
This final report summarizes the results of a comparative assessment of the fault tolerance and reliability of different Guidance, Navigation and Control (GN&C) architectural approaches. This study was proactively performed by a combined Massachusetts Institute of Technology (MIT) and Draper Laboratory team as a GN&C "Discipline-Advancing" activity sponsored by the NASA Engineering and Safety Center (NESC). This systematic comparative assessment of GN&C system architectural approaches was undertaken as a fundamental step towards understanding the opportunities for, and limitations of, architecting highly reliable and fault tolerant GN&C systems composed of common avionic components. The primary goal of this study was to obtain architectural 'rules of thumb' that could positively influence future designs in the direction of an optimized (i.e., most reliable and cost-efficient) GN&C system. A secondary goal was to demonstrate the application and the utility of a systematic modeling approach that maps the entire possible architecture solution space.
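One concrete slice of such an assessment is the redundancy arithmetic itself: exact m-of-n reliability from a common string reliability. The numbers below are illustrative, not the study's.

```python
from math import comb

def k_of_n(k, n, r):
    # Probability that at least k of n independent strings of reliability r survive.
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r = 0.995                                  # single GN&C string, one mission
print(f"simplex        : {r:.6f}")
print(f"triplex 2-of-3 : {k_of_n(2, 3, r):.6f}")
print(f"quad    2-of-4 : {k_of_n(2, 4, r):.6f}")
```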
Quad-rotor flight path energy optimization
NASA Astrophysics Data System (ADS)
Kemper, Edward
Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, this idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model that is derived for the problem at hand, yielding a set of partial differential equations and boundary value conditions to solve these equations. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using the Ziegler-Nichols proportional-integral-derivative (PID) controller tuning technique. Finally, a brute-force, look-up-table-based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
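The Ziegler-Nichols step mentioned above is a table lookup once the ultimate gain and period are known; the classic closed-loop rules in executable form follow, with Ku and Tu as placeholders for values found by the oscillation test.

```python
def zn_pid(Ku, Tu):
    # Classic Ziegler-Nichols closed-loop PID rules.
    Kp = 0.6 * Ku
    Ti, Td = Tu / 2.0, Tu / 8.0
    return Kp, Kp / Ti, Kp * Td            # (Kp, Ki, Kd)

Kp, Ki, Kd = zn_pid(Ku=4.2, Tu=0.9)        # hypothetical attitude-loop test values
print(f"Kp={Kp:.2f}, Ki={Ki:.2f}, Kd={Kd:.3f}")
```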
NASA Astrophysics Data System (ADS)
Gong, Xiaoyan; Li, Ying; Zhang, Yongqiang
2018-06-01
In view of the enlargement of fully mechanized face excavation and long-distance driving, gas emission and dust production increase greatly. However, the direction angle, caliber and front-back distance of current ventilation devices cannot change dynamically, resulting in serious accumulation in the dead zone. In this paper, a new device is proposed that can solve the above problems. The finite element software ANSYS was used to simulate and optimize the structural safety of the control device's key components. The optimization results show that the equivalent stress decreases by 49%; after optimization, the deformation and mass are 0.829 mm and 0.548 kg, which are 21% and 10% lower than before. The quality, safety, reliability and cost of the control device reach the expected standards, which can meet the requirements of safe ventilation and dust suppression at the fully mechanized face.
Supercritical tests of a self-optimizing, variable-Camber wind tunnel model
NASA Technical Reports Server (NTRS)
Levinsky, E. S.; Palko, R. L.
1979-01-01
A testing procedure was used in a 16-foot Transonic Propulsion Wind Tunnel which leads to optimum wing airfoil sections without stopping the tunnel for model changes. Because the optimization is experimental, the optimum shapes obtained incorporate various three-dimensional and nonlinear viscous and transonic effects not included in analytical optimization methods. The method is a closed-loop, computer-controlled, interactive procedure and employs a Self-Optimizing Flexible Technology wing semispan model that conformally adapts the airfoil section at two spanwise control stations to maximize or minimize various prescribed merit functions subject to both equality and inequality constraints. The model, which employed twelve independent hydraulic actuator systems and flexible skins, was also used for conventional testing. Although six of the seven optimizations attempted were at least partially convergent, further improvements in model skin smoothness and hydraulic reliability are required to make the technique fully operational.
Optimal Redundancy Management in Reconfigurable Control Systems Based on Normalized Nonspecificity
NASA Technical Reports Server (NTRS)
Wu, N.Eva; Klir, George J.
1998-01-01
In this paper, the notion of normalized nonspecificity is introduced. The nonspecificity measures the uncertainty of the estimated parameters that reflect impairment in a controlled system. Based on this notion, a quantity called the reconfiguration coverage is calculated. It represents the likelihood of success of a control reconfiguration action. This coverage links the overall system reliability to the achievable and required control performance, as well as diagnostic performance. The coverage, when calculated on-line, is used for managing the redundancy in the system.
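The reliability role of coverage reduces to one line of arithmetic: a fault is survived only if reconfiguration is triggered and succeeds, so system reliability is coverage-limited. The sketch below, with invented numbers, shows how strongly the coverage term dominates.

```python
def mission_reliability(r_nominal, coverage, r_backup):
    # Survive if no fault occurs, or a fault occurs but the covered
    # reconfiguration succeeds and the reconfigured system holds.
    return r_nominal + (1.0 - r_nominal) * coverage * r_backup

for c in (0.90, 0.99, 0.999):
    print(f"coverage {c:.3f} -> system reliability "
          f"{mission_reliability(0.98, c, 0.97):.5f}")
```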
Smart grid technologies in local electric grids
NASA Astrophysics Data System (ADS)
Lezhniuk, Petro D.; Pijarski, Paweł; Buslavets, Olga A.
2017-08-01
The research is devoted to the creation of favorable conditions for the integration of renewable sources of energy into electric grids that were designed to be supplied from centralized generation at large electric power stations. The development of distributed generation in electric grids influences their operating conditions: a conflict of interests arises. The possibility of optimal joint functioning of electric grids and renewable sources of energy is shown, where the composite optimality criterion is the balance reliability of electric energy in the local electric system together with minimum losses of electric energy in it. A multilevel automated system for power flow control in electric grids by means of changing the output of distributed generation is developed. Optimization of power flows is performed by local systems of automatic control of small hydropower stations and, if possible, solar power plants.
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
A Higher Harmonic Optimal Controller to Optimise Rotorcraft Aeromechanical Behaviour
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
1996-01-01
Three methods to optimize rotorcraft aeromechanical behavior for those cases where the rotorcraft plant can be adequately represented by a linear model system matrix were identified and implemented in a stand-alone code. These methods determine the optimal control vector which minimizes the vibration metric subject to constraints at discrete time points, and differ from the commonly used non-optimal constraint penalty methods such as those employed by conventional controllers in that the constraints are handled as actual constraints to an optimization problem rather than as just additional terms in the performance index. The first method is to use a Non-linear Programming algorithm to solve the problem directly. The second method is to solve the full set of non-linear equations which define the necessary conditions for optimality. The third method is to solve each of the possible reduced sets of equations defining the necessary conditions for optimality when the constraints are pre-selected to be either active or inactive, and then to simply select the best solution. The effects of maneuvers and aeroelasticity on the system matrix are modelled by using a pseudo-random pseudo-row-dependency scheme to define the system matrix. Cases run to date indicate that the first method of solution is reliable, robust, and easiest to use, and that it is superior to the conventional controllers considered.
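To illustrate the distinction drawn above between penalty methods and handling constraints as actual constraints, here is a minimal Python sketch using SciPy's SLSQP solver; the quadratic vibration metric, its weights, and the constraints are invented for illustration and are not the paper's rotorcraft model.

```python
import numpy as np
from scipy.optimize import minimize

Q = np.diag([2.0, 1.0, 3.0])                     # assumed vibration-metric weights
vibration_metric = lambda u: u @ Q @ u           # J(u) = u' Q u

cons = [{"type": "eq", "fun": lambda u: u.sum() - 1.0}]   # equality constraint
bnds = [(0.0, None)] * 3                                  # inequality: u >= 0

res = minimize(vibration_metric, x0=np.full(3, 1/3), bounds=bnds,
               constraints=cons, method="SLSQP")
print(res.x)   # ~[0.273, 0.545, 0.182]: weights inverse to the Q diagonal
```

Unlike a penalty formulation, the constraint here is satisfied exactly at the solution rather than traded off against the performance index.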
NASA Technical Reports Server (NTRS)
Patten, William Neff
1989-01-01
There is an evident need to discover a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and/or uncertain model dynamics. The development of a generic controller design tool for tough-to-control systems is reported. The method utilizes a moving grid, time infinite element based solution of the necessary conditions that describe an optimal controller for a system. The technique produces a discrete feedback controller. Real time laboratory experiments are now being conducted to demonstrate the viability of the method. The algorithm that results is being implemented in a microprocessor environment. Critical computational tasks are accomplished using a low cost, on-board, multiprocessor (INMOS T800 Transputers) and parallel processing. Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.
Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method
NASA Astrophysics Data System (ADS)
Zhang, Xiangnan
2018-03-01
Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, boundary conditions, etc. Reliability methods measure the structural safety condition and determine the optimal combination of design parameters based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach to minimize structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information into the uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.
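As a minimal illustration of how a Bayesian update can absorb sparse test data into the distribution parameters that an RBDO loop consumes, consider a conjugate normal model; all numbers below are hypothetical, not from the review.

```python
import numpy as np

# Prior belief about a mean yield strength (illustrative values, MPa)
mu0, tau0 = 250.0, 20.0      # prior mean and prior standard deviation
sigma = 15.0                 # assumed known measurement scatter
data = np.array([242.0, 255.0, 248.0])   # a few test coupons

# Conjugate normal-normal update: precisions add
n = data.size
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)
print(post_mean, np.sqrt(post_var))  # sharpened estimate feeding the RBDO loop
```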
Beyond reliability to profitability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, T.H.; Mitchell, J.S.
1996-07-01
Reliability concerns have controlled much of power generation design and operations. Emerging from a strictly regulated environment, profitability is becoming a much more important concept for today's power generation executives. This paper discusses the conceptual advance: view power plant maintenance as a profit center, go beyond reliability, and embrace profitability. Profit Centered Maintenance begins with the premise that financial considerations, namely profitability, drive most aspects of modern process and manufacturing operations. Profit Centered Maintenance is a continuous process of reliability and administrative improvement and optimization. For power generation executives with troublesome maintenance programs, Profit Centered Maintenance can be the blueprint to increased profitability. It requires a culture change to make decisions based on value, to reengineer the administration of maintenance, and to enable the people performing and administering maintenance to make the most of available maintenance information technology. The key steps are to optimize the physical function of maintenance and to resolve recurring maintenance problems so that the need for maintenance can be reduced. Profit Centered Maintenance is more than just an attitude; it is a path to profitability, be it resulting in increased profits or increased market share.
NASA Technical Reports Server (NTRS)
Wong, P. K.
1975-01-01
The closely-related problems of designing reliable feedback stabilization strategy and coordinating decentralized feedbacks are considered. Two approaches are taken. A geometric characterization of the structure of control interaction (and its dual) was first attempted and a concept of structural homomorphism developed based on the idea of 'similarity' of interaction pattern. The idea of finding classes of individual feedback maps that do not 'interfere' with the stabilizing action of each other was developed by identifying the structural properties of nondestabilizing and LQ-optimal feedback maps. Some known stability properties of LQ-feedback were generalized and some partial solutions were provided to the reliable stabilization and decentralized feedback coordination problems. A concept of coordination parametrization was introduced, and a scheme for classifying different modes of decentralization (information, control law computation, on-line control implementation) in control systems was developed.
Intelligent Tires Based on Measurement of Tire Deformation
NASA Astrophysics Data System (ADS)
Matsuzaki, Ryosuke; Todoroki, Akira
From a traffic safety point-of-view, there is an urgent need for intelligent tires as a warning system for road conditions, for optimized braking control on poor road surfaces and as a tire fault detection system. Intelligent tires, equipped with sensors for monitoring applied strain, are effective in improving reliability and control systems such as anti-lock braking systems (ABSs). In previous studies, we developed a direct tire deformation or strain measurement system with sufficiently low stiffness and high elongation for practical use, and a wireless communication system between tires and vehicle that operates without a battery. The present study investigates the application of strain data for an optimized braking control and road condition warning system. The relationships between strain sensor outputs and tire mechanical parameters, including braking torque, effective radius and contact patch length, are calculated using finite element analysis. Finally, we suggested the possibility of optimized braking control and road condition warning systems. Optimized braking control can be achieved by keeping the slip ratio constant. The road condition warning would be actuated if the recorded friction coefficient at a certain slip ratio is lower than a ‘safe’ reference value.
Intelligent tires for improved tire safety using wireless strain measurement
NASA Astrophysics Data System (ADS)
Matsuzaki, Ryosuke; Todoroki, Akira
2008-03-01
From a traffic safety point-of-view, there is an urgent need for intelligent tires as a warning system for road conditions, for optimized braking control on poor road surfaces and as a tire fault detection system. Intelligent tires, equipped with sensors for monitoring applied strain, are effective in improving reliability and control systems such as anti-lock braking systems (ABSs). In previous studies, we developed a direct tire deformation or strain measurement system with sufficiently low stiffness and high elongation for practical use, and a wireless communication system between tires and vehicle that operates without a battery. The present study investigates the application of strain data for an optimized braking control and road condition warning system. The relationships between strain sensor outputs and tire mechanical parameters, including braking torque, effective radius and contact patch length, are calculated using finite element analysis. Finally, we suggested the possibility of optimized braking control and road condition warning systems. Optimized braking control can be achieved by keeping the slip ratio constant. The road condition warning would be actuated if the recorded friction coefficient at a certain slip ratio is lower than a 'safe' reference value.
Heat-transfer optimization of a high-spin thermal battery
NASA Astrophysics Data System (ADS)
Krieger, Frank C.
Recent advancements in thermal battery technology have produced batteries incorporating a fusible-material heat reservoir for operating-temperature control that operate reliably under the high spin rates often encountered in ordnance applications. Attention is presently given to the heat-transfer optimization of a high-spin thermal battery employing a nonfusible steel heat reservoir, on the basis of a computer code that simulated the effect of an actual fusible-material heat reservoir on battery performance. Both heat-paper and heat-pellet thermal battery configurations were considered.
Ullah, Hakeem; Islam, Saeed; Khan, Ilyas; Shafie, Sharidan; Fiza, Mehreen
2015-01-01
In this paper we applied a new analytic approximate technique, the Optimal Homotopy Asymptotic Method (OHAM), to the treatment of coupled differential-difference equations (DDEs). To see the efficiency and reliability of the method, we consider the Relativistic Toda coupled nonlinear differential-difference equation. The method provides a convenient way to control the convergence of approximate solutions when compared with other methods of solution found in the literature. The obtained solutions show that OHAM is effective, simpler, easier, and explicit.
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research implements the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structures are cost-effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analyses, followed by simulation techniques performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, the implementation of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem is presented and discussed.
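A minimal sketch of the Monte Carlo step described above, estimating a probability of failure from sampled resistance and load; the limit state and distributions are illustrative stand-ins, not the Kevlar 49 containment model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 1_000_000

# Limit state g = R - S: failure whenever the load effect S exceeds resistance R
R = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=N)   # resistance samples
S = rng.normal(loc=200.0, scale=30.0, size=N)               # load-effect samples

pf = np.mean(R - S < 0.0)                 # Monte Carlo probability of failure
beta = -norm.ppf(pf)                      # corresponding reliability index
print(pf, beta)
```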
Status and trends in active control technology
NASA Technical Reports Server (NTRS)
Rediess, H. A.; Szalai, K. J.
1975-01-01
The emergence of highly reliable fly-by-wire flight control systems makes it possible to consider a strong reliance on automatic control systems in the design optimization of future aircraft. This design philosophy has been referred to as the control configured vehicle approach or the application of active control technology. Several studies and flight tests sponsored by the Air Force and NASA have demonstrated the potential benefits of control configured vehicles and active control technology. The present status and trends of active control technology are reviewed and the impact it will have on aircraft designs, design techniques, and the designer is predicted.
Operating wind turbines in strong wind conditions by using feedforward-feedback control
NASA Astrophysics Data System (ADS)
Feng, Ju; Sheng, Wen Zhong
2014-12-01
Due to the increasing penetration of wind energy into power systems, it becomes critical to reduce the impact of wind energy on the stability and reliability of the overall power system. In previous works, Shen and his co-workers developed a re-designed operation schema to run wind turbines in strong wind conditions based on an optimization method and standard PI feedback control, which can prevent the typical shutdowns of wind turbines when reaching the cut-out wind speed. In this paper, a new control strategy combining the standard PI feedback control with feedforward controls using the optimization results is investigated for the operation of variable-speed pitch-regulated wind turbines in strong wind conditions. It is shown that the developed control strategy is capable of smoothing the power output of the wind turbine and avoiding its sudden shutdown at high wind speeds without worsening the loads on the rotor and blades.
Finite element based electric motor design optimization
NASA Technical Reports Server (NTRS)
Campbell, C. Warren
1993-01-01
The purpose of this effort was to develop a finite element code for the analysis and design of permanent magnet electric motors. These motors would drive electromechanical actuators in advanced rocket engines. The actuators would control fuel valves and thrust vector control systems. Refurbishing the hydraulic systems of the Space Shuttle after each flight is costly and time consuming. Electromechanical actuators could replace hydraulics, improve system reliability, and reduce down time.
Philip A. Araman; Janice K. Wiedenbeck
1995-01-01
Automated lumber grading and yield optimization using computer controlled saws will be plausible for hardwoods if and when lumber scanning systems can reliably identify all defects by type. Existing computer programs could then be used to grade the lumber, identify the best cut-up solution, and control the sawing machines. The potential value of a scanning grading...
2009-10-01
phase and factors which may cause accelerated growth rates is key to achieving a reliable and robust bearing design. The end goal is to identify control parameters for optimizing bearing materials for improved... 25.0 nm and were each fabricated from the same material heats, respectively, to a custom design print to ABEC 5 quality and had split inner rings. Each had
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang
2015-01-01
The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient Genetic Algorithm (GA) based optimization algorithm for the reliability and performance of SCAs in WSNs is developed to find the optimal structure of SCAs with fault tolerance in WSNs. To examine the feasibility of our algorithm, we evaluated its performance. Furthermore, the interrelationships between reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
NASA Astrophysics Data System (ADS)
Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.
2005-05-01
A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
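The weighted-average model being tested can be written down compactly: the statistically optimal weight of cue i is its reliability, w_i = (1/sigma_i^2) / sum_j (1/sigma_j^2). A short Python sketch of such reliability-based cue combination, with hypothetical slant estimates and variances:

```python
import numpy as np

def combine_cues(estimates, variances):
    """Minimum-variance unbiased linear combination: weights proportional
    to each cue's reliability 1/sigma_i^2."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    fused = np.dot(w, estimates)
    fused_var = 1.0 / np.sum(1.0 / np.asarray(variances, dtype=float))
    return fused, fused_var, w

# Hypothetical slant estimates (deg) from a texture cue and a haptic cue
print(combine_cues([32.0, 38.0], [4.0, 9.0]))  # the more reliable cue dominates
```

The paper's finding is that human weights track the reliabilities qualitatively but do not reach this optimal combination exactly.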
Behavior and neural basis of near-optimal visual search
Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre
2013-01-01
The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
Data mining-based coefficient of influence factors optimization of test paper reliability
NASA Astrophysics Data System (ADS)
Xu, Peiyao; Jiang, Huiping; Wei, Jieyao
2018-05-01
Testing is a significant part of the teaching process. It demonstrates the final outcome of school teaching through teachers' teaching level and students' scores. The analysis of a test paper is a complex operation that exhibits non-linear relations among the length of the paper, the time duration, and the degree of difficulty. It is therefore difficult, with general methods, to optimize the coefficients of the influence factors under different conditions in order to obtain test papers with clearly higher reliability [1]. With data mining techniques such as Support Vector Regression (SVR) and Genetic Algorithms (GA), we can model the test paper analysis and optimize the coefficients of the impact factors for higher reliability. The test results show that the combination of SVR and GA achieves an effective improvement in reliability. The optimized coefficients of the influence factors are practical in actual application, and the whole optimization procedure can offer a model basis for test paper analysis.
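A rough sketch of the SVR-plus-GA pipeline described above (scikit-learn assumed; the synthetic data, toy fitness surface, and GA settings are invented for illustration): the SVR learns a mapping from influence-factor coefficients to reliability, and the GA then searches that surrogate for coefficients predicting the highest reliability.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic history: (length, duration, difficulty) coefficients vs reliability
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 0.9 - (X[:, 0] - 0.4)**2 - 0.5 * (X[:, 2] - 0.6)**2 + rng.normal(0, 0.01, 200)

surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(X, y)

pop = rng.uniform(0.0, 1.0, size=(50, 3))       # GA over coefficient vectors
for _ in range(100):
    fitness = surrogate.predict(pop)
    elite = pop[np.argsort(fitness)[-20:]]      # truncation selection
    ma = elite[rng.integers(0, 20, size=50)]
    pa = elite[rng.integers(0, 20, size=50)]
    pop = 0.5 * (ma + pa)                       # blend crossover
    pop += rng.normal(0.0, 0.05, pop.shape)     # Gaussian mutation
    pop = np.clip(pop, 0.0, 1.0)

best = pop[np.argmax(surrogate.predict(pop))]
print(best, surrogate.predict(best[None]))      # ~(0.4, -, 0.6) on this toy surface
```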
NASA Astrophysics Data System (ADS)
Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken
2016-08-01
NAND flash memory's reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, an 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with a conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO), has two parts, adaptive V_Ref shift (AVS) and V_TH space control (VSC). AVS reduces read errors and latency by adaptively optimizing the reference voltage (V_Ref) based on temperature, W/E cycles and retention-time. AVS stores the optimal V_Ref values in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V_TH states. DVO reduces BER by 80%.
Optimal multisensory decision-making in a reaction-time task.
Drugowitsch, Jan; DeAngelis, Gregory C; Klier, Eliana M; Angelaki, Dora E; Pouget, Alexandre
2014-06-14
Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (reaction-time task), or when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.
Multichannel temperature controller for hot air solar house
NASA Technical Reports Server (NTRS)
Currie, J. R.
1979-01-01
This paper describes an electronic controller that is optimized to operate a hot air solar system. Thermal information is obtained from copper constantan thermocouples and a wall-type thermostat. The signals from the thermocouples are processed through a single amplifier using a multiplexing scheme. The multiplexing reduces the component count and automatically calibrates the thermocouple amplifier. The processed signals connect to some simple logic that selects one of the four operating modes. This simple, inexpensive, and reliable scheme is well suited to control hot air solar systems.
Development of a nanosatellite de-orbiting system by reliability based design optimization
NASA Astrophysics Data System (ADS)
Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem
2015-12-01
This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.
Evaluation of neural network modeling to calculate well-watered leaf temperature of wine grape
USDA-ARS?s Scientific Manuscript database
Mild to moderate water stress is desirable in wine grape for controlling vine vigor and optimizing fruit yield and quality, but precision irrigation management is hindered by the lack of a reliable method to easily quantify and monitor vine water status. The crop water stress index (CWSI) that effec...
Environment assisted degradation mechanisms in advanced light metals
NASA Technical Reports Server (NTRS)
Gangloff, Richard P.; Stoner, Glenn E.; Swanson, Robert E.
1988-01-01
The general goals of the research program are to characterize alloy behavior quantitatively and to develop predictive mechanisms for environmental failure modes. Successes in this regard will provide the basis for metallurgical optimization of alloy performance, for chemical control of aggressive environments, and for engineering life prediction with damage tolerance and long term reliability.
Kim, Jae-Woo; Jeong, Jin-Woo; Kang, Jun-Tae; Choi, Sungyoul; Ahn, Seungjoon; Song, Yoon-Ho
2014-02-14
Highly reliable field electron emitters were developed using a formulation for reproducible damage-free carbon nanotube (CNT) composite pastes with optimal inorganic fillers and a ball-milling method. We carefully controlled the ball-milling sequence and time to avoid any damage to the CNTs, which incorporated fillers that were fully dispersed as paste constituents. The field electron emitters fabricated by printing the CNT pastes were found to exhibit almost perfect adhesion of the CNT emitters to the cathode, along with good uniformity and reproducibility. A high field enhancement factor of around 10,000 was achieved from the CNT field emitters developed. By selecting nano-sized metal alloys and oxides and using the same formulation sequence, we also developed reliable field emitters that could survive high-temperature post processing. These field emitters had high durability to post vacuum annealing at 950 °C, guaranteeing survival of the brazing process used in the sealing of field emission x-ray tubes. We evaluated the field emitters in a triode configuration in the harsh environment of a tiny vacuum-sealed vessel and observed very reliable operation for 30 h at a high current density of 350 mA cm(-2). The CNT pastes and related field emitters that were developed could be usefully applied in reliable field emission devices.
An optimal ultrasonographic diagnostic test for early gout: A prospective controlled study.
Norkuviene, Eleonora; Petraitis, Mykolas; Apanaviciene, Indre; Virviciute, Dalia; Baranauskaite, Asta
2017-08-01
Objective To identify the optimal sites for classification of early gout by ultrasonography. Methods Sixty patients with monosodium urate crystal-proven gout (25 with early gout [≤2-year symptom duration], 35 with late gout [>2-year symptom duration], and 36 normouricemic healthy controls) from one centre were prospectively evaluated. Standardized blinded ultrasound examination of 36 joints and the triceps and patellar tendons was performed to identify tophi and the double contour (DC) sign. Results Ultrasonographic sensitivity was lower in early than late gout. Binary logistic regression analysis showed that two ultrasonographic signs (tophi in the first metatarsophalangeal joint [odds ratio, 16.46] and the DC sign in the ankle [odds ratio, 25.18]) significantly contributed to the final model for early gout diagnosis (sensitivity and specificity of 84% and 81%, respectively). The inter-reader reliability kappa value for the DC sign and tophi was 0.712. Conclusions Four-joint investigation (both first metatarsophalangeal joints for tophi and both ankles for the DC sign) is feasible and reliable and could be proposed as a screening test for early ultrasonographic gout classification in daily practice.
NASA Astrophysics Data System (ADS)
Manzanares-Filho, N.; Albuquerque, R. B. F.; Sousa, B. S.; Santos, L. G. C.
2018-06-01
This article presents a comparative study of some versions of the controlled random search algorithm (CRSA) in global optimization problems. The basic CRSA, originally proposed by Price in 1977 and improved by Ali et al. in 1997, is taken as a starting point. Then, some new modifications are proposed to improve the efficiency and reliability of this global optimization technique. The performance of the algorithms is assessed using traditional benchmark test problems commonly invoked in the literature. This comparative study points out the key features of the modified algorithm. Finally, a comparison is also made in a practical engineering application, namely the inverse aerofoil shape design.
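For reference, the basic CRSA step that the compared versions all build on can be sketched in a few lines of Python; the bounds, population size, and the Rosenbrock test function are illustrative choices, not the article's benchmark suite.

```python
import numpy as np

def crsa(f, bounds, pop_size=50, iters=5000, seed=0):
    """Basic controlled random search (after Price, 1977): keep a population,
    reflect one random point through the centroid of n others, and replace
    the current worst point whenever the trial improves on it."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    n = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, n))
    vals = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        idx = rng.choice(pop_size, size=n + 1, replace=False)
        centroid = pop[idx[:-1]].mean(axis=0)
        trial = 2.0 * centroid - pop[idx[-1]]        # reflection step
        if np.any(trial < lo) or np.any(trial > hi):
            continue                                  # reject out-of-bounds trials
        ft = f(trial)
        worst = np.argmax(vals)
        if ft < vals[worst]:
            pop[worst], vals[worst] = trial, ft
    best = np.argmin(vals)
    return pop[best], vals[best]

# Benchmark check on the Rosenbrock function (minimum at (1, 1))
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(crsa(rosen, bounds=[(-2, 2), (-2, 2)]))
```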
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1990-01-01
A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics data base is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.
Optimal periodic proof test based on cost-effective and reliability criteria
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1976-01-01
An exploratory study for the optimization of periodic proof tests for fatigue-critical structures is presented. The optimal proof load level and the optimal number of periodic proof tests are determined by minimizing the total expected (statistical average) cost, while the constraint on the allowable level of structural reliability is satisfied. The total expected cost consists of the expected cost of proof tests, the expected cost of structures destroyed by proof tests, and the expected cost of structural failure in service. It is demonstrated by numerical examples that significant cost saving and reliability improvement for fatigue-critical structures can be achieved by the application of the optimal periodic proof test. The present study is relevant to the establishment of optimal maintenance procedures for fatigue-critical structures.
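A toy numeric version of the trade-off described above, balancing the expected cost of testing, of structures destroyed by the proof test, and of service failures; the normal strength model and cost ratios are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

mu_R, sd_R = 100.0, 10.0          # illustrative strength distribution
s_service = 70.0                  # service load level
c_test, c_destroy, c_fail = 1.0, 50.0, 1000.0   # relative costs

def expected_cost(q):
    """Expected cost of one proof test at load q: test cost, cost of
    destroying the article, and cost of service failure given survival."""
    Fq = norm.cdf(q, mu_R, sd_R)                  # P(destroyed by proof test)
    Fs = norm.cdf(s_service, mu_R, sd_R)
    # Surviving the proof truncates the strength distribution at q
    p_fail = 0.0 if q >= s_service else (Fs - Fq) / max(1.0 - Fq, 1e-12)
    return c_test + Fq * c_destroy + p_fail * c_fail

qs = np.linspace(50.0, 95.0, 451)
costs = [expected_cost(q) for q in qs]
print(qs[int(np.argmin(costs))])   # optimal proof load level (near 70 here)
```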
Optimal Control of Distributed Energy Resources using Model Predictive Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.
2012-07-22
In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage and demand response can be used to complement fossil fueled generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in power output of diesel generators, minimizing costs associated with low battery life of energy storage and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
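A compact sketch of one receding-horizon step of this kind of MPC dispatch (CVXPY assumed available; the horizon, forecasts, limits, and weights are invented, and the paper's actual multi-objective formulation is richer):

```python
import numpy as np
import cvxpy as cp

H = 6                                             # prediction horizon (hours)
load = np.array([50, 55, 60, 58, 52, 50.0])       # forecast demand (kW)
wind = np.array([20, 35, 10, 25, 30, 15.0])       # forecast wind (kW)
soc = 50.0                                        # current stored energy (kWh)

p_d = cp.Variable(H, nonneg=True)                 # diesel output
p_b = cp.Variable(H)                              # battery power (+discharge)
e = cp.Variable(H + 1)                            # stored energy trajectory

cost = (cp.sum(0.3 * p_d)                         # fuel cost proxy
        + 0.05 * cp.sum_squares(cp.diff(p_d))     # penalize diesel ramping
        + 0.01 * cp.sum_squares(p_b))             # battery-wear proxy
cons = [e[0] == soc, e[1:] == e[:-1] - p_b,       # storage dynamics
        e >= 10, e <= 90,                         # state-of-charge limits
        p_d <= 80, cp.abs(p_b) <= 20,             # device limits
        p_d + p_b + wind == load]                 # power balance
cp.Problem(cp.Minimize(cost), cons).solve()
print(p_d.value[0], p_b.value[0])   # apply only the first move, then re-solve
```

The closed loop comes from re-solving this problem each interval with updated forecasts and the measured state of charge.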
Reliability prediction of large fuel cell stack based on structure stress analysis
NASA Astrophysics Data System (ADS)
Liu, L. F.; Liu, B.; Wu, C. W.
2017-09-01
The aim of this paper is to improve the reliability of a Proton Exchange Membrane Fuel Cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. The stack reliability is directly determined by the component reliability, which is affected by the material properties and contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping process. We investigated the influence of the parameter variation coefficient on the probability distribution of contact stress using an equivalent stiffness model and the first-order second-moment method. The optimal contact stress that keeps the component at the highest reliability level is obtained by the stress-strength interference model. To obtain the optimal contact stress between the contact components, the component thickness and the stack clamping force are optimally designed. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of the stack structure for a highly reliable fuel cell stack.
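The stress-strength interference computation at the core of this design approach reduces, for independent normal stress and strength, to a closed form: R = Phi((mu_strength - mu_stress) / sqrt(sd_strength^2 + sd_stress^2)). A small Python sketch with invented numbers (the linear force-to-stress map is an assumption for illustration):

```python
from math import sqrt
from scipy.stats import norm

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    # R = P(strength > stress) for independent normal strength and stress
    beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return norm.cdf(beta)

# Scan hypothetical clamping-force levels against a crushing-strength limit
for clamp_kN in (1.0, 1.2, 1.4):
    mu_stress = 8.0 * clamp_kN          # MPa, invented stiffness relation
    print(clamp_kN, interference_reliability(15.0, 1.5, mu_stress, 1.0))
```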
On reliable control system designs. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Birdwell, J. D.
1978-01-01
A mathematical model for use in the design of reliable multivariable control systems is discussed with special emphasis on actuator failures and necessary actuator redundancy levels. The model consists of a linear time invariant discrete time dynamical system. Configuration changes in the system dynamics are governed by a Markov chain that includes transition probabilities from one configuration state to another. The performance index is a standard quadratic cost functional, over an infinite time interval. The actual system configuration can be deduced with a one step delay. The calculation of the optimal control law requires the solution of a set of highly coupled Riccati-like matrix difference equations. Results can be used for off-line studies relating the open loop dynamics, required performance, actuator mean time to failure, and functional or identical actuator redundancy, with and without feedback gain reconfiguration strategies.
Optimal Sensor Location Design for Reliable Fault Detection in Presence of False Alarms
Yang, Fan; Xiao, Deyun; Shah, Sirish L.
2009-01-01
To improve fault detection reliability, sensor location should be designed according to an optimization criterion with constraints imposed by issues of detectability and identifiability. Reliability requires the minimization of the undetectability and false alarm probabilities due to random factors in sensor readings, which are not only related to the sensor readings but also affected by fault propagation. This paper introduces reliability criteria expressed in terms of the missed/false alarm probability of each sensor and the system topology or connectivity derived from the directed graph. An algorithm for the optimization problem is presented as a heuristic procedure. Finally, a boiler system is illustrated using the proposed method. PMID:22291524
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2009-01-01
A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. The solution to the stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traces out an inverted-S-shaped graph. The center of the inverted-S graph corresponds to a 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas reliability can be relaxed somewhat for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code was the deterministic analysis tool, (2) the fast probabilistic integrator, or FPI module, of the NESSUS software was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated with an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
Optimization of Statistical Methods Impact on Quantitative Proteomics Data.
Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L
2015-10-02
As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data using both controlled experiments with known quantitative differences for specific proteins used as standards as well as "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility-optimization can consistently produce reliable differential expression rankings for label-free proteome tools and are straightforward in their application.
Optimization and real-time control for laser treatment of heterogeneous soft tissues.
Feng, Yusheng; Fuentes, David; Hawkins, Andrea; Bass, Jon M; Rylander, Marissa Nichole
2009-01-01
Predicting the outcome of thermotherapies in cancer treatment requires an accurate characterization of the bioheat transfer processes in soft tissues. Due to the biological and structural complexity of tumor (soft tissue) composition and vasculature, it is often very difficult to obtain reliable tissue properties, which are one of the key factors for accurate prediction of the treatment outcome. Efficient algorithms employing in vivo thermal measurements to determine heterogeneous thermal tissue properties, in conjunction with a detailed sensitivity analysis, can produce essential information for model development and optimal control. The goals of this paper are to present a general formulation of the bioheat transfer equation for heterogeneous soft tissues; review models and algorithms developed for cell damage, heat shock proteins, and soft tissues with nanoparticle inclusions; and demonstrate an overall computational strategy for developing a laser treatment framework with the ability to perform real-time robust calibration and optimal control. This computational strategy can be applied to other thermotherapies using heat sources such as radio frequency or high-intensity focused ultrasound.
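For context, bioheat transfer formulations of this kind commonly build on the Pennes equation; a heterogeneous-tissue sketch of that classical form (not necessarily the paper's exact formulation) is, with rho c the volumetric heat capacity of tissue, k(x) the spatially varying conductivity, omega_b the blood perfusion rate, c_b the blood specific heat, T_a the arterial temperature, and Q_laser the laser source term:

```latex
\rho c \,\frac{\partial T}{\partial t}
  = \nabla \cdot \bigl( k(\mathbf{x}) \, \nabla T \bigr)
  + \omega_b c_b \,\bigl( T_a - T \bigr)
  + Q_{\mathrm{laser}}(\mathbf{x}, t)
```

Calibration then amounts to inferring k(x) and omega_b from in vivo temperature measurements, which is where the sensitivity analysis mentioned above enters.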
Islam, Naz Niamul; Hannan, M A; Shareef, Hussain; Mohamed, Azah; Salam, M A
2014-01-01
Power oscillation damping controllers are designed on a linearized model with heuristic optimization techniques. Selection of the objective function is crucial for damping controller design by optimization algorithms. In this research, a comparative analysis has been carried out to evaluate the effectiveness of popular objective functions used in power system oscillation damping. A two-stage lead-lag damping controller by means of power system stabilizers is optimized using the differential search algorithm for different objective functions. Linearized model simulations are performed to compare the dominant modes' performance, and then the nonlinear model is used to evaluate the damping performance over power system oscillations. All simulations are conducted in a two-area four-machine power system to provide a detailed analysis. The results prove that the multiobjective D-shaped function is an effective objective function in terms of moving unstable and lightly damped electromechanical modes into the stable region. Thus, the D-shaped function ultimately improves overall system damping and concurrently enhances power system reliability.
Adaptive control for solar energy based DC microgrid system development
NASA Astrophysics Data System (ADS)
Zhang, Qinhao
During the upgrading of the current electric power grid, it is expected that smarter, more robust, and more reliable power systems integrated with distributed generation will be developed. To realize these objectives, traditional control techniques are no longer effective in either stabilizing systems or delivering optimal and robust performance. Therefore, the development of advanced control methods has received increasing attention in power engineering. This work addresses two specific problems in the control of solar panel based microgrid systems. First, a new control scheme is proposed for microgrid systems to achieve an optimal energy conversion ratio in the solar panels. The control system can optimize the efficiency of the maximum power point tracking (MPPT) algorithm by implementing two layers of adaptive control. Such a hierarchical control architecture has greatly improved the system performance, which is validated through both mathematical analysis and computer simulation. Second, in the development of the microgrid transmission system, the issues related to telecommunication delay and the negative incremental impedance of constant power loads (CPLs) are investigated. A reference model based method is proposed for pole and zero placement that addresses the challenges of the time delay and CPLs in closed-loop control. The effectiveness of the proposed modeling and control design methods is demonstrated in a simulation testbed. Practical aspects of the proposed methods for general microgrid systems are also discussed.
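The inner layer of such an MPPT scheme is commonly a hill-climbing rule; a minimal perturb-and-observe sketch in Python (the panel power curve and step size are invented, and the adaptive layers described above would sit on top of this basic loop):

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.1):
    """One perturb-and-observe step: keep moving the operating voltage in
    the direction that last increased panel power, otherwise reverse."""
    moved_up = v >= v_prev
    if p >= p_prev:
        direction = 1.0 if moved_up else -1.0
    else:
        direction = -1.0 if moved_up else 1.0
    return v + direction * step

# Toy panel curve with its maximum power point near 17 V (illustrative only)
power = lambda v: max(0.0, 100.0 - 0.5 * (v - 17.0) ** 2)

v_prev, v = 14.0, 14.1
p_prev = power(v_prev)
for _ in range(200):
    p = power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
print(v)   # settles into a small oscillation band around the 17 V maximum
```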
Systems Integration Fact Sheet
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-06-01
This fact sheet is an overview of the Systems Integration subprogram at the U.S. Department of Energy SunShot Initiative. The Systems Integration subprogram enables the widespread deployment of safe, reliable, and cost-effective solar energy technologies by addressing the associated technical and non-technical challenges. These include timely and cost-effective interconnection procedures, optimal system planning, accurate prediction of solar resources, monitoring and control of solar power, maintaining grid reliability and stability, and many more. To address the challenges associated with interconnecting and integrating hundreds of gigawatts of solar power onto the electricity grid, the Systems Integration program funds research, development, and demonstration projects in four broad, interrelated focus areas: grid performance and reliability, dispatchability, power electronics, and communications.
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
Improving the FLORIS wind plant model for compatibility with gradient-based optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Jared J.; Gebraad, Pieter MO; Ning, Andrew
The FLORIS (FLOw Redirection and Induction in Steady-state) model, a parametric wind turbine wake model that predicts steady-state wake characteristics based on wind turbine position and yaw angle, was developed for optimization of control settings and turbine locations. This article provides details on changes made to the FLORIS model to make it more suitable for gradient-based optimization. Changes to the FLORIS model were made to remove discontinuities and add curvature to regions of non-physical zero gradient. Exact gradients for the FLORIS model were obtained using algorithmic differentiation. A set of three case studies demonstrates that using exact gradients with gradient-based optimization reduces the number of function calls by several orders of magnitude. The case studies also show that adding curvature improves convergence behavior, allowing gradient-based optimization algorithms used with the FLORIS model to more reliably find better solutions to wind farm optimization problems.
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-10-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
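A minimal sketch of the optimization this data-driven model enables: with a fitted Weibull reliability R(t) = exp(-(t/eta)^beta), the long-run cost rate of an age-based preventive policy can be scanned over candidate intervals. The parameters and cost ratios below are invented, not the case-study values.

```python
import numpy as np

beta, eta = 2.5, 1000.0        # illustrative Weibull shape/scale (hours)
c_p, c_f = 1.0, 10.0           # preventive vs corrective maintenance cost

R = lambda t: np.exp(-(t / eta) ** beta)

def cost_rate(T, n=2000):
    """Long-run cost per hour of replacing preventively at age T:
    (expected cycle cost) / (expected cycle length)."""
    t = np.linspace(0.0, T, n)
    mean_cycle = np.trapz(R(t), t)                 # expected cycle length
    return (c_p * R(T) + c_f * (1.0 - R(T))) / mean_cycle

Ts = np.linspace(100.0, 2000.0, 400)
best = Ts[np.argmin([cost_rate(T) for T in Ts])]
print(best)    # preventive-maintenance interval minimizing the cost rate
```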
Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness
Pimentel-Niño, M. A.; Saxena, Paresh; Vazquez-Castro, M. A.
2015-01-01
A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is streamed live video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, and a novel quality of experience (QoE) framework is proposed which allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute the available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show consistently high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, complying with the information-centric networking philosophy and architecture. PMID:26247057
Probabilistic Analysis and Design of a Raked Wing Tip for a Commercial Transport
NASA Technical Reports Server (NTRS)
Mason, Brian H.; Chen, Tzi-Kang; Padula, Sharon L.; Ransom, Jonathan B.; Stroud, W. Jefferson
2008-01-01
An approach for conducting reliability-based design and optimization (RBDO) of a Boeing 767 raked wing tip (RWT) is presented. The goal is to evaluate the benefits of RBDO for design of an aircraft substructure. A finite-element (FE) model that includes eight critical static load cases is used to evaluate the response of the wing tip. Thirteen design variables that describe the thickness of the composite skins and stiffeners are selected to minimize the weight of the wing tip. A strain-based margin of safety is used to evaluate the performance of the structure. The randomness in the load scale factor and in the strain limits is considered. Of the 13 variables, the wing-tip design was controlled primarily by the thickness of the thickest plies in the upper skins. The report includes an analysis of the optimization results and recommendations for future reliability-based studies.
An R package for the design, analysis and operation of reservoir systems
NASA Astrophysics Data System (ADS)
Turner, Sean; Ng, Jia Yi; Galelli, Stefano
2016-04-01
We present a new R package - named "reservoir" - which has been designed for rapid and easy routing of runoff through storage. The package comprises well-established tools for capacity design (e.g., the sequent peak algorithm), performance analysis (storage-yield-reliability and reliability-resilience-vulnerability analysis) and release policy optimization (Stochastic Dynamic Programming). Operating rules can be optimized for water supply, flood control and amenity objectives, as well as for maximum hydropower production. Storage-depth-area relationships are in-built, allowing users to incorporate evaporation from the reservoir surface. We demonstrate the capabilities of the software for global studies using thousands of reservoirs from the Global Reservoir and Dam (GRanD) database fed by historical monthly inflow time series from a 0.5 degree gridded global runoff dataset. The package is freely available through the Comprehensive R Archive Network (CRAN).
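The sequent peak algorithm mentioned above is compact enough to sketch directly; this is a generic textbook version in Python rather than the package's own R interface, with made-up monthly inflows and a constant demand.

```python
import numpy as np

def sequent_peak(inflow, demand):
    """Required storage capacity: accumulate (demand - inflow), floor the
    running deficit at zero, and return the largest deficit reached."""
    deficit, capacity = 0.0, 0.0
    for q, d in zip(inflow, demand):
        deficit = max(0.0, deficit + d - q)
        capacity = max(capacity, deficit)
    return capacity

inflow = np.array([5, 7, 9, 4, 3, 2, 6, 8, 10, 4, 3, 5], dtype=float)
print(sequent_peak(inflow, demand=np.full(12, 5.5)))  # storage needed for a 5.5 yield
```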
Visible-blind ultraviolet photodetectors on porous silicon carbide substrates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naderi, N.; Hashim, M.R., E-mail: roslan@usm.my
2013-06-01
Highlights: • Highly reliable UV detectors are fabricated on porous silicon carbide substrates. • The optical properties of samples are enhanced by increasing the current density. • The optimized sample exhibits enhanced sensitivity to the incident UV radiation. - Abstract: Highly reliable visible-blind ultraviolet (UV) photodetectors were successfully fabricated on porous silicon carbide (PSC) substrates. High responsivity and high photoconductive gain were observed in a metal–semiconductor–metal ultraviolet photodetector that was fabricated on an optimized PSC substrate. The PSC samples were prepared via the UV-assisted photo-electrochemical etching of an n-type hexagonal silicon carbide (6H-SiC) substrate using different etching current densities. The optical results showed that the current density is an outstanding etching parameter that controls the porosity and uniformity of PSC substrates. A highly porous substrate was synthesized using a suitable etching current density to enhance its light absorption, thereby improving the sensitivity of a UV detector built on this substrate. The electrical characteristics of devices fabricated on optimized PSC substrates exhibited enhanced sensitivity and responsivity to the incident radiation.
NREL's Energy Storage and REopt Teams Awarded $525k from TCF to Study Commercial Viability of Optimal, Reliable Building-Integrated Energy Storage | News | NREL
November 14, 2017
Control and Optimization of Electric Ship Propulsion Systems with Hybrid Energy Storage
NASA Astrophysics Data System (ADS)
Hou, Jun
Electric ships experience large propulsion-load fluctuations on their drive shaft due to encountered waves and the rotational motion of the propeller, affecting the reliability of the shipboard power network and causing wear and tear. This dissertation explores new solutions to address these fluctuations by integrating a hybrid energy storage system (HESS) and developing energy management strategies (EMS). Advanced electric propulsion drive concepts are developed to improve energy efficiency, performance, and system reliability by integrating a HESS, developing advanced control solutions and system integration strategies, and creating tools (including models and a testbed) for design and optimization of hybrid electric drive systems. A ship dynamics model which captures the underlying physical behavior of the electric ship propulsion system is developed to support control development and system optimization. To evaluate the effectiveness of the proposed control approaches, a state-of-the-art testbed has been constructed which includes a system controller, Li-Ion battery and ultra-capacitor (UC) modules, a high-speed flywheel, electric motors with their power electronic drives, DC/DC converters, and rectifiers. The feasibility and effectiveness of the HESS are investigated and analyzed. Two different HESS configurations, namely battery/UC (B/UC) and battery/flywheel (B/FW), are studied and analyzed to provide insights into the advantages and limitations of each configuration. Battery usage, loss analysis, and sensitivity to battery aging are also analyzed for each configuration. In order to enable real-time application and achieve the desired performance, a model predictive control (MPC) approach is developed in which a state-of-charge (SOC) reference for the flywheel (B/FW) or the UC (B/UC) is used to address the limitations imposed by short predictive horizons, which otherwise ignore the benefit of operating the flywheel or UC near its high-efficiency range. Given the multi-frequency characteristics of the load fluctuations, a filter-based control strategy is developed to illustrate the importance of coordination within the HESS. Without proper control strategies, the HESS solution could be worse than a single energy storage system solution. The proposed HESS, when introduced into an existing shipboard electrical propulsion system, will interact with the power generation systems. A model-based analysis is performed to evaluate the interactions of the multiple power sources when a hybrid energy storage system is introduced. The study has revealed undesirable interactions when the controls are not coordinated properly, leading to the conclusion that a proper EMS is needed. Knowledge of the propulsion-load torque is essential for the proposed system-level EMS, but this load torque is immeasurable in most marine applications. To address this issue, a model-based approach is developed so that load torque estimation and prediction can be incorporated into the MPC. In order to evaluate the effectiveness of the proposed approach, an input observer with linear prediction is developed as an alternative approach to obtain the load estimation and prediction. Comparative studies are performed to illustrate the importance of load torque estimation and prediction, and demonstrate the effectiveness of the proposed approach in terms of improved efficiency, enhanced reliability, and reduced wear and tear. Finally, the real-time MPC algorithm has been implemented on a physical testbed.
Three different efforts have been made to enable real-time implementation: a specially tailored problem formulation, an efficient optimization algorithm and a multi-core hardware implementation. Compared to the filter-based strategy, the proposed real-time MPC achieves superior performance, in terms of the enhanced system reliability, improved HESS efficiency, and extended battery life.
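The filter-based strategy referred to above can be illustrated with a minimal sketch: a first-order low-pass filter routes the slow component of the propulsion load to the battery while the ultra-capacitor or flywheel absorbs the high-frequency residual. The time constant and load profile below are hypothetical, not values from the dissertation.

```python
import numpy as np

def split_load(p_load, dt=0.1, tau=5.0):
    """Low-pass split of a fluctuating load: the battery follows the
    smoothed demand; the fast residual goes to the UC/flywheel."""
    alpha = dt / (tau + dt)
    p_batt = np.empty_like(p_load)
    p_batt[0] = p_load[0]
    for k in range(1, len(p_load)):
        p_batt[k] = p_batt[k - 1] + alpha * (p_load[k] - p_batt[k - 1])
    return p_batt, p_load - p_batt

t = np.arange(0.0, 60.0, 0.1)
load = 800.0 + 150.0 * np.sin(2 * np.pi * 0.5 * t)  # hypothetical kW wave-induced ripple
p_batt, p_uc = split_load(load)
```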
Optimal control, investment and utilization schemes for energy storage under uncertainty
NASA Astrophysics Data System (ADS)
Mirhosseini, Niloufar Sadat
Energy storage has the potential to offer new means for added flexibility in electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources, and energy bill savings for the end users. However, uncertainty about system states and volatility in system dynamics can complicate the question of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resource output, storage technology cost, and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model Predictive Control and discretized dynamic programming, along with a new decomposition algorithm, are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two-stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to optimally interact with the energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here is the basis for a framework which extends from long-term investments in storage capacity to short-term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment in storage capacity over time to maximize savings during normal and emergency operations; (ii) optimal market strategy of buy and sell over 24-hour periods; (iii) optimal storage charge and discharge in much shorter time intervals.
Robust optimization based energy dispatch in smart grids considering demand uncertainty
NASA Astrophysics Data System (ADS)
Nassourou, M.; Puig, V.; Blesa, J.
2017-01-01
In this study we discuss the application of robust optimization to the problem of economic energy dispatch in smart grids. Robust optimization-based MPC strategies for tackling uncertain load demands are developed. Unexpected additive disturbances are modelled by defining an affine dependence between the control inputs and the uncertain load demands. The developed strategies were applied to a hybrid power system connected to an electrical power grid. Furthermore, to demonstrate the superiority of standard Economic MPC over tracking MPC, a comparison (e.g., average daily cost) between standard tracking MPC, standard Economic MPC, and the integration of both in one-layer and two-layer approaches was carried out. The goal of this research is to design a controller based on Economic MPC strategies that tackles uncertainties, in order to minimise economic costs and guarantee the service reliability of the system.
Efficiency Management in Spaceflight Systems
NASA Technical Reports Server (NTRS)
Murphy, Karen
2016-01-01
Efficiency in spaceflight is often approached as “faster, better, cheaper – pick two”. The high levels of performance and reliability required for each mission suggest that planners can only control for two of the three. True efficiency comes from optimizing a system across all three parameters. The functional processes of spaceflight become technical requirements on three operational groups during mission planning: payload, vehicle, and launch operations. Given the interrelationships among the functions performed by the operational groups, shifting function resources from one operational group to the others affects the efficiency of those groups and therefore of the mission overall. This paper outlines such a framework and creates a context in which to understand the effects of resource trades on the overall system, improving the efficiency of the operational groups and the mission as a whole. This allows insight into and optimization of the controlling factors earlier in the mission planning stage.
In-flight performance optimization for rotorcraft with redundant controls
NASA Astrophysics Data System (ADS)
Ozdemir, Gurbuz Taha
A conventional helicopter has limits on performance at high speeds because of the limitations of the main rotor, such as compressibility issues on the advancing side or stall issues on the retreating side. Auxiliary lift and thrust components have been suggested to improve the performance of the helicopter substantially by reducing the loading on the main rotor. Such a configuration is called a compound rotorcraft. Rotor speed can also be varied to improve helicopter performance. In addition to improved performance, compound rotorcraft and variable RPM can provide a much larger degree of control redundancy. This additional redundancy gives the opportunity to further enhance performance and handling qualities. A flight control system is designed to perform in-flight optimization of redundant control effectors on a compound rotorcraft in order to minimize power required and extend range. This "Fly to Optimal" (FTO) control law is tested in simulation using the GENHEL model. Models of the UH-60, a compound version of the UH-60A with lifting wing and vectored thrust ducted propeller (VTDP), and a generic compound version of the UH-60A with lifting wing and propeller were developed and tested in simulation. A model-following dynamic inversion controller is implemented for inner loop control of roll, pitch, yaw, heave, and rotor RPM. An outer loop controller regulates airspeed and flight path during optimization. A Golden Section search method was used to find optimal rotor RPM on a conventional helicopter, where the single redundant control effector is rotor RPM. The FTO builds on the Adaptive Performance Optimization (APO) method of Gilyard, which performs low-frequency sweeps on a redundant control for a fixed-wing aircraft. A method based on the APO method was used to optimize trim on a compound rotorcraft with several redundant control effectors. The controller can be used to optimize rotor RPM and compound control effectors through flight test or simulations in order to establish a schedule. The method has been expanded to search a two-dimensional control space. Simulation results demonstrate the ability to maximize range by optimizing stabilator deflection and an airspeed set point. Another set of results minimize power required in high-speed flight by optimizing collective pitch and stabilator deflection. Results show that the control laws effectively hold the flight condition while the FTO method is effective at improving performance. Optimizations show there can be issues when the control laws regulating altitude push the collective control towards its limits, so a modification was made to the control law to regulate airspeed and altitude using propeller pitch and angle of attack while the collective is held fixed or used as an optimization variable. A dynamic trim limit avoidance algorithm is applied to avoid control saturation in other axes during optimization maneuvers. Range and power optimization FTO simulations are compared with comprehensive sweeps of trim solutions, and FTO optimization is shown to be effective and reliable in reaching an optimum when optimizing up to two redundant controls. Use of redundant controls is shown to be beneficial for improving performance. The search method takes almost 25 minutes of simulated flight for optimization to be complete. The optimization maneuver itself can sometimes drive the power required to high values, so a power limit is imposed to restrict the search to avoid conditions where power is more than 5% higher than that of the initial trim state.
With this modification, the time the optimization maneuver takes to complete is reduced down to 21 minutes without any significant change in the optimal power value.
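A golden section search of the kind used for the single-variable rotor-RPM case is easy to sketch; the bowl-shaped power curve below is a made-up stand-in for the GENHEL power-required function, not simulation output.

```python
import math

def golden_section(f, a, b, tol=1e-3):
    """Minimize a unimodal function f on [a, b] by golden section search."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

power = lambda rpm: 1500.0 + 900.0 * (rpm - 0.92) ** 2  # illustrative kW vs. RPM fraction
print(golden_section(power, 0.80, 1.10))                # -> about 0.92
```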
Reliability optimization design of the gear modification coefficient based on the meshing stiffness
NASA Astrophysics Data System (ADS)
Wang, Qianqian; Wang, Hui
2018-04-01
Since the time-varying meshing stiffness of a gear system is the key factor affecting gear vibration, it is important to design the meshing stiffness so as to reduce vibration. Based on the effect of the gear modification coefficient on the meshing stiffness, and considering random parameters, a reliability-based optimization design of the gear modification coefficient is investigated. The dimension-reduction and point-estimation method is used to estimate the moments of the limit state function, and the reliability is obtained by the fourth-moment method. A comparison of the dynamic amplitude results before and after optimization indicates that the research is useful for reducing vibration and noise and improving reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Lian, Jianming; Sun, Yannan
Demand response represents a significant but largely untapped resource that can greatly enhance the flexibility and reliability of power systems. In this paper, a hierarchical control framework is proposed to facilitate the integrated coordination between distributed energy resources and demand response. The proposed framework consists of coordination and device layers. In the coordination layer, various resource aggregations are optimally coordinated in a distributed manner to achieve the system-level objectives. In the device layer, individual resources are controlled in real time to follow the optimal power generation or consumption dispatched from the coordination layer. For the purpose of practical applications, a method is presented to determine the utility functions of controllable loads by taking into account the real-time load dynamics and the preferences of individual customers. The effectiveness of the proposed framework is validated by detailed simulation studies.
Dynamics and control of escape and rescue from a tumbling spacecraft
NASA Technical Reports Server (NTRS)
Kaplan, M. H.
1972-01-01
The results of 18 months of investigations are reported. A movable mass control system to convert the tumbling motion of a spacecraft into simple spin was studied along with the optimization techniques for generating displacement profiles for a tumbling asymmetrical body. Equations of motion are discussed for two asymmetrical vehicles with flexible beams and one spacecraft with flexible solar arrays. The characteristics which allow reasonable safety and reliability in bailout are also discussed.
Jiang, Yazhou; Liu, Chen -Ching; Xu, Yin
2016-04-19
The increasing importance of system reliability and resilience is changing the way distribution systems are planned and operated. To achieve a distribution system self-healing against power outages, emerging technologies and devices, such as remote-controlled switches (RCSs) and smart meters, are being deployed. The higher level of automation is transforming traditional distribution systems into the smart distribution systems (SDSs) of the future. The availability of data and remote control capability in SDSs provides distribution operators with an opportunity to optimize system operation and control. In this paper, the development of SDSs and resulting benefits of enhanced system capabilities are discussed. A comprehensive survey is conducted on the state-of-the-art applications of RCSs and smart meters in SDSs. Specifically, a new method, called Temporal Causal Diagram (TCD), is used to incorporate outage notifications from smart meters for enhanced outage management. To fully utilize the fast operation of RCSs, the spanning tree search algorithm is used to develop service restoration strategies. Optimal placement of RCSs and the resulting enhancement of system reliability are discussed. Distribution system resilience with respect to extreme events is presented. Furthermore, test cases are used to demonstrate the benefit of SDSs. Active management of distributed generators (DGs) is introduced. Future research in a smart distribution environment is proposed.
Dual-mode ultraflow access networks: a hybrid solution for the access bottleneck
NASA Astrophysics Data System (ADS)
Kazovsky, Leonid G.; Shen, Thomas Shunrong; Dhaini, Ahmad R.; Yin, Shuang; De Leenheer, Marc; Detwiler, Benjamin A.
2013-12-01
Optical Flow Switching (OFS) is a promising solution for large Internet data transfers. In this paper, we introduce UltraFlow Access, a novel optical access network architecture that offers dual-mode service to its end users: IP and OFS. With UltraFlow Access, we design and implement a new dual-mode control plane and a new dual-mode network stack to ensure efficient connection setup and reliable, optimal data transmission. We study the impact of the UltraFlow system's design on the network throughput. Our experimental results show that, with an optimized system design, near-optimal (around 10 Gb/s) OFS data throughput can be attained when the line rate is 10 Gb/s.
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
The reliability allocation process may be described as assigning reliability requirements to individual components within a system so as to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design, and it often begins at the conceptual stage. As the system design develops and more information about the components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding these limitations and assumptions can produce unrealistic results. This report addresses weighting-factor and optimal reliability allocation techniques and identifies the applicability and limitations of each.
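As one concrete instance of a weighting-factor method, here is a sketch of an ARINC-style allocation for a series system of exponential units: each unit's weight is its share of the total predicted failure rate, and it receives that share of the allowed system failure rate. The rates and system target are illustrative.

```python
import numpy as np

def arinc_allocation(failure_rates, r_system, t=1.0):
    """Allocate unit reliabilities so the series product meets r_system."""
    w = np.asarray(failure_rates, dtype=float)
    w /= w.sum()                        # weighting factors (sum to 1)
    lam_sys = -np.log(r_system) / t     # allowed system failure rate
    return np.exp(-w * lam_sys * t)     # allocated unit reliabilities

print(arinc_allocation([2e-5, 5e-5, 1e-5], r_system=0.999, t=100.0))
```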
Optimization of structures on the basis of fracture mechanics and reliability criteria
NASA Technical Reports Server (NTRS)
Heer, E.; Yang, J. N.
1973-01-01
A systematic summary of the factors involved in optimizing a given structural configuration is presented as part of a report resulting from a study of the objective function. The predicted reliability of the finished structure is sharply dependent upon the results of coupon tests. The optimization analysis developed in the study also accounts for the expected cost of proof testing.
Zheng, Hong; Clausen, Morten Rahr; Dalsgaard, Trine Kastrup; Mortensen, Grith; Bertram, Hanne Christine
2013-08-06
We describe a time-saving protocol for the processing of LC-MS-based metabolomics data by optimizing parameter settings in XCMS and threshold settings for removing noisy and low-intensity peaks using design of experiment (DoE) approaches, including Plackett-Burman design (PBD) for screening and central composite design (CCD) for optimization. A reliability index, which is based on evaluation of the linear response to a dilution series, was used as a parameter for the assessment of data quality. After identifying the significant parameters in the XCMS software by PBD, CCD was applied to determine their values by maximizing the reliability and group indexes. Optimal settings by DoE resulted in improvements of 19.4% and 54.7% in the reliability index for a standard mixture and human urine, respectively, as compared with the default settings, and a total of 38 h was required to complete the optimization. Moreover, threshold settings were optimized by using CCD for further improvement. The approach combining optimal parameter settings and the threshold method improved the reliability index about 9.5 times for the standard mixture and 14.5 times for the human urine data, which required a total of 41 h. Validation results also showed improvements in the reliability index of about 5-7 times even for urine samples from different subjects. It is concluded that the proposed methodology can be used as a time-saving approach for improving the processing of LC-MS-based metabolomics data.
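One plausible reading of the linearity-based reliability index, sketched here under our own assumptions (the authors' exact definition may differ), is the fraction of features whose peak intensities fit a straight line across the dilution series with high R²:

```python
import numpy as np

def reliability_index(peaks, dilution, r2_min=0.90):
    """Fraction of features responding linearly to the dilution series."""
    good = 0
    for y in peaks:                              # peaks: (features, dilutions)
        slope, intercept = np.polyfit(dilution, y, 1)
        ss_res = np.sum((y - (slope * dilution + intercept)) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        if ss_tot > 0 and 1.0 - ss_res / ss_tot >= r2_min:
            good += 1
    return good / len(peaks)

rng = np.random.default_rng(0)
dil = np.array([1.0, 0.5, 0.25, 0.125])
peaks = rng.uniform(0.5, 2.0, (100, 1)) * dil + rng.normal(0, 0.05, (100, 4))
print(reliability_index(peaks, dil))
```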
Optimization Testbed Cometboards Extended into Stochastic Domain
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software. It was originally developed for deterministic calculation and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk or reliability. The optimum solution, including the weight of the structure, is also obtained as a function of reliability. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, corresponding to unity for reliability. Weight can be reduced to a small value for the most failure-prone design, with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool; the fast probabilistic integrator (FPI) module of the NESSUS software was the probabilistic calculator; and CometBoards was the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life airframe component made of metallic and composite materials.
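The inverted-S weight-versus-reliability curve can be reproduced under a simple illustrative assumption: if the structural margin is normally distributed, the weight required to achieve a target probability of success follows the inverse normal CDF, diverging as reliability approaches one. The numbers below are purely illustrative, not CometBoards output.

```python
import numpy as np
from scipy.stats import norm

def required_weight(reliability, w50=100.0, spread=12.0):
    """Weight achieving success probability R for a Gaussian margin:
    w50 is the weight at R = 0.5; spread scales the uncertainty."""
    return w50 + spread * norm.ppf(reliability)

for r in (0.02, 0.50, 0.98, 0.9999):
    print(f"R = {r:.4f} -> weight ~ {required_weight(r):.1f}")
```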
NASA Astrophysics Data System (ADS)
Karri, Naveen K.; Mo, Changki
2018-06-01
Structural reliability of thermoelectric generation (TEG) systems still remains an issue, especially for applications such as large-scale industrial or automobile exhaust heat recovery, in which TEG systems are subject to dynamic loads and thermal cycling. Traditional thermoelectric (TE) system design and optimization techniques, focused on performance alone, could result in designs that may fail during operation as the geometric requirements for optimal performance (especially the power) are often in conflict with the requirements for mechanical reliability. This study focused on reducing the thermomechanical stresses in a TEG system without compromising the optimized system performance. Finite element simulations were carried out to study the effect of TE element (leg) geometry such as leg length and cross-sectional shape under constrained material volume requirements. Results indicated that the element length has a major influence on the element stresses whereas regular cross-sectional shapes have minor influence. The impact of TE element stresses on the mechanical reliability is evaluated using brittle material failure theory based on Weibull analysis. An alternate couple configuration that relies on the industry practice of redundant element design is investigated. Results showed that the alternate configuration considerably reduced the TE element and metallization stresses, thereby enhancing the structural reliability, with little trade-off in the optimized performance. The proposed alternate configuration could serve as a potential design modification for improving the reliability of systems optimized for thermoelectric performance.
Design and control strategy for a hybrid green energy system for mobile telecommunication sites
NASA Astrophysics Data System (ADS)
Okundamiya, Michael S.; Emagbetere, Joy O.; Ogujor, Emmanuel A.
2014-07-01
The rising energy costs and carbon footprint of operating mobile telecommunication sites in the emerging world have increased research interest in green technology. The intermittent nature of most green energy sources creates the problem of designing the optimum configuration for a given location. This study presents the design analysis and control strategy for cost-effective and reliable operation of a hybrid green energy system (HGES) for GSM base transceiver station (BTS) sites in isolated regions. The design constrains the generation and distribution of power to reliably satisfy the energy demand while ensuring safe operation of the system. The overall process control applies a genetic algorithm-based technique for optimal techno-economic sizing of the system's components. The process simulation utilized meteorological data for three locations (Abuja, Benin City and Sokoto) with varying climatic conditions in Nigeria. Simulation results presented for green GSM BTS sites are discussed and compared with existing approaches.
Oximeter reliability in a subzero environment.
Macnab, A J; Smith, M; Phillips, N; Smart, P
1996-11-01
Pulse oximeters optimize care in the pre-hospital setting. As British Columbia ambulance teams often provide care in subzero temperatures, we conducted a study to determine the reliability of 3 commercially-available portable oximeters in a subzero environment. We hypothesized that there is no significant difference between SaO2 readings obtained using a pulse oximeter at room temperature and a pulse oximeter operating at sub-zero temperatures. Subjects were stable normothermic children in intensive care on Hewlett Packard monitors (control unit) at room temperature. The test units were packed in dry ice in an insulated bin (temperature - 15 degrees C to -30 degrees C) and their sensors placed on the subjects, contralateral to the control sensors. Data were collected simultaneously from test and control units immediately following validation of control unit values by co-oximetry (blood gas). No data were unacceptable. Two units (Propaq 106EC and Nonin 8500N) functioned well to < -15 degrees C, providing data comparable to those obtained from the control unit (p < 0.001). The Siemens Micro O2 did not function at the temperatures tested. Monitor users who require equipment to function in subzero environments (military, Coast Guard, Mountain Rescue) should ensure that function is reliable, and could test units using this method.
NASA Astrophysics Data System (ADS)
Hanish Nithin, Anu; Omenzetter, Piotr
2017-04-01
Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most existing studies have used structural reliability and Bayesian pre-posterior analysis for optimization. This paper proposes an extension of the previous approaches in a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs, combining the elements of structural reliability/risk analysis (SRA) and Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using Bayesian analysis. The output of this framework determines the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between the life-cycle costs and the risk of structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are presented. Two case scenarios, namely whether to build an initially expensive but robust structure or a cheaper but more quickly deteriorating one, and whether to adopt an expensive monitoring system, are examined to aid the decision-making process.
NASA Astrophysics Data System (ADS)
Wright, Robert; Abraham, Edo; Parpas, Panos; Stoianov, Ivan
2015-12-01
The operation of water distribution networks (WDN) with a dynamic topology is a recently pioneered approach for the advanced management of District Metered Areas (DMAs) that integrates novel developments in hydraulic modeling, monitoring, optimization, and control. A common practice for leakage management is the sectorization of WDNs into small zones, called DMAs, by permanently closing isolation valves. This enables water companies to identify bursts and estimate leakage levels by measuring the inlet flow for each DMA. However, by permanently closing valves, a number of problems have been created, including reduced resilience to failure and suboptimal pressure management. By introducing a dynamic topology to these zones, these disadvantages can be eliminated while still retaining the DMA structure for leakage monitoring. In this paper, a novel optimization method based on sequential convex programming (SCP) is outlined for the control of a dynamic topology with the objective of reducing average zone pressure (AZP). A key attribute for control optimization is reliable convergence. To achieve this, the SCP method we propose guarantees that each optimization step is strictly feasible, resulting in improved convergence properties. By using a null space algorithm for hydraulic analyses, the computations required are also significantly reduced. The optimized control is actuated on a real WDN operated with a dynamic topology. This unique experimental program incorporates a number of technologies set up with the objective of investigating pioneering developments in WDN management. Preliminary results indicate AZP reductions for a dynamic topology of up to 6.5% over optimally controlled fixed topology DMAs.
NASA Astrophysics Data System (ADS)
Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun
2018-07-01
Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network reliability maximization by finding the optimal multi-state resource assignment, in which one resource is assigned to each arc. However, a disaster may cause correlated failures among the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments are adopted to demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, L.; Britt, J.; Birkmire, R.
ITN Energy Systems, Inc., and Global Solar Energy, Inc., assisted by NREL's PV Manufacturing R&D program, have continued to advance CIGS production technology by developing trajectory-oriented predictive/control models, fault-tolerance control, control platform development, in-situ sensors, and process improvements. Modeling activities included developing physics-based and empirical models for CIGS and sputter-deposition processing, implementing model-based control, and applying predictive models to the construction of new evaporation sources and for control. Model-based control is enabled by implementing reduced or empirical models into a control platform. Reliability improvement activities include implementing preventive maintenance schedules; detecting failed sensors/equipment and reconfiguring to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which in turn has been enabled by control and reliability improvements due to this PV Manufacturing R&D program.
USDA-ARS?s Scientific Manuscript database
Precision irrigation management in wine grape production is hindered by the lack of a reliable method to easily quantify and monitor vine water status. Mild to moderate water stress is desirable in wine grape for controlling vine vigor and optimizing fruit yield and quality. A crop water stress ind...
JPRS Report, Science & Technology, USSR: Computers, Control Systems and Machines
1989-03-14
[Recoverable fragments only: a citation to 'Teoriya kodirovaniya i optimizatsii slozhnykh sistem' (Coding Theory and Complex System Optimization), Alma-Ata, Nauka Press, 1977, pp. 8-16; contents entries including 'Interpreter Specifics' (O. I. Amvrosova, p. 141) and 'Creation of Modern Computer Systems for Complex Ecological...'; and a note that a processor can be designed to decrease degradation upon failure and assure more reliable processor operation without requiring more complex software.]
NASA Technical Reports Server (NTRS)
Hanks, G. W.; Shomber, H. A.; Dethman, H. A.; Gratzer, L. B.; Maeshiro, A.; Gangsaas, D.; Blight, J. D.; Buchan, S. M.; Crumb, C. B.; Dorwart, R. J.
1981-01-01
An active controls technology (ACT) system architecture was selected based on current-technology system elements, and optimal control theory was evaluated for use in analyzing and synthesizing ACT multiple control laws. The system selected employs three redundant computers to implement all of the ACT functions, four redundant smaller computers to implement the crucial pitch-augmented stability function, and a separate maintenance and display computer. The reliability objective, a probability of crucial-function failure of less than 1 x 10^-9 per 1-hr flight, can be met with current-technology system components, if the software is assumed fault free and coverage approaching 1.0 can be provided. The optimal control theory approach to ACT control law synthesis yielded comparable control law performance much more systematically and directly than the classical s-domain approach. The ACT control law performance, although somewhat degraded by the inclusion of representative nonlinearities, remained quite effective. Certain high-frequency gust-load alleviation functions may require increased surface rate capability.
Dušek, Adam; Bartoš, Luděk; Sedláček, František
2017-01-01
Litter size is one of the most reliable state-dependent life-history traits that indicate parental investment in polytocous (litter-bearing) mammals. The tendency to optimize litter size typically increases with decreasing availability of resources during the period of parental investment. To determine whether this tactic is also influenced by resource limitations prior to reproduction, we examined the effect of experimental, pre-breeding food restriction on the optimization of parental investment in lactating mice. First, we investigated the optimization of litter size in 65 experimental and 72 control families (mothers and their dependent offspring). Further, we evaluated pre-weaning offspring mortality, and the relationships between maternal and offspring condition (body weight), as well as offspring mortality, in 24 experimental and 19 control families with litter reduction (the death of one or more offspring). Assuming that pre-breeding food restriction would signal unpredictable food availability, we hypothesized that the optimization of parental investment would be more effective in the experimental rather than in the control mice. In comparison to the controls, the experimental mice produced larger litters and had a more selective (size-dependent) offspring mortality and thus lower litter reduction (the proportion of offspring deaths). Selective litter reduction helped the experimental mothers to maintain their own optimum condition, thereby improving the condition and, indirectly, the survival of their remaining offspring. Hence, pre-breeding resource limitations may have facilitated the mice to optimize their inclusive fitness. On the other hand, in the control females, the absence of environmental cues indicating a risky environment led to "maternal optimism" (overemphasizing good conditions at the time of breeding), which resulted in the production of litters of super-optimal size and consequently higher reproductive costs during lactation, including higher offspring mortality. Our study therefore provides the first evidence that pre-breeding food restriction promotes the optimization of parental investment, including offspring number and developmental success.
High power diode lasers emitting from 639 nm to 690 nm
NASA Astrophysics Data System (ADS)
Bao, L.; Grimshaw, M.; DeVito, M.; Kanskar, M.; Dong, W.; Guan, X.; Zhang, S.; Patterson, J.; Dickerson, P.; Kennedy, K.; Li, S.; Haden, J.; Martinsen, R.
2014-03-01
There is increasing market demand for high-power reliable red lasers for display and cinema applications. Due to the fundamental material-system limit in this wavelength range, red diode lasers have lower efficiency and are more temperature sensitive than 790-980 nm diode lasers. In terms of reliability, red lasers are also more sensitive to catastrophic optical mirror damage (COMD) due to the higher photon energy. Thus developing higher-power reliable red lasers is very challenging. This paper presents nLIGHT's released red products from 639 nm to 690 nm, with established high performance and long-term reliability. These single-emitter diode lasers can work as stand-alone single-emitter units or efficiently integrate into our compact, passively-cooled Pearl™ fiber-coupled module architectures for higher output power and improved reliability. In order to further improve power and reliability, new chip optimizations have focused on improving epitaxial design/growth, chip configuration/processing and optical facet passivation. Initial optimization has demonstrated promising results for 639 nm diode lasers to be reliably rated at 1.5 W and 690 nm diode lasers to be reliably rated at 4.0 W. Accelerated life-testing has started, and further design optimizations are underway.
Beating the limits with initial correlations
NASA Astrophysics Data System (ADS)
Basilewitsch, Daniel; Schmidt, Rebecca; Sugny, Dominique; Maniscalco, Sabrina; Koch, Christiane P.
2017-11-01
Fast and reliable reset of a qubit is a key prerequisite for any quantum technology. For real world open quantum systems undergoing non-Markovian dynamics, reset implies not only purification, but in particular erasure of initial correlations between qubit and environment. Here, we derive optimal reset protocols using a combination of geometric and numerical control theory. For factorizing initial states, we find a lower limit for the entropy reduction of the qubit as well as a speed limit. The time-optimal solution is determined by the maximum coupling strength. Initial correlations, remarkably, allow for faster reset and smaller errors. Entanglement is not necessary.
Optimal solutions for a bio mathematical model for the evolution of smoking habit
NASA Astrophysics Data System (ADS)
Sikander, Waseem; Khan, Umar; Ahmed, Naveed; Mohyud-Din, Syed Tauseef
In this study, we apply the Variation of Parameters Method (VPM) coupled with an auxiliary parameter to obtain approximate solutions for the epidemic model of the evolution of the smoking habit in a constant population. Convergence of the developed algorithm, namely VPM with an auxiliary parameter, is studied. Furthermore, a simple way is presented for obtaining an optimal value of the auxiliary parameter by minimizing the total residual error over the domain of the problem. Comparison of the obtained results with standard VPM shows that the auxiliary parameter is effective and reliable in controlling the convergence of the approximate solutions.
Concept report: Microprocessor control of electrical power system
NASA Technical Reports Server (NTRS)
Perry, E.
1977-01-01
An electrical power system which uses a microprocessor for systems control and monitoring is described. The microprocessor-controlled system permits real-time modification of system parameters for optimizing a system configuration, especially in the event of an anomaly. By reducing the component count, the assembly and testing of the unit are simplified and reliability is increased. A reusable modular power conversion system capable of satisfying a large percentage of space application requirements is examined, along with the programmable power processor. The PC global controller, which handles systems control and external communication, is analyzed, and a software description is given. A systems application summary is also included.
Reliability considerations in the placement of control system components
NASA Technical Reports Server (NTRS)
Montgomery, R. C.
1983-01-01
This paper presents a methodology, along with applications to a grid-type structure, for incorporating reliability considerations into the decision on actuator placement for large space structures. The method involves the minimization of a criterion that considers mission life and the reliability of the system components. It is assumed that the actuator gains are to be readjusted following failures, but their locations cannot be changed. The goal of the design is to suppress vibrations of the grid, and the integral square of the grid modal amplitudes is used as a measure of performance of the control system. When reliability of the actuators is considered, a more pertinent measure is the expected value of the integral; that is, the sum of the squares of the modal amplitudes for each possible failure state considered, multiplied by the probability that the failure state will occur. For a given set of actuator locations, the optimal criterion may be graphed as a function of the ratio of the mean time to failure of the components and the design mission life or reservicing interval. The best location of the actuators is typically different for a short mission life than for a long one.
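In notation sketched here (ours, not the paper's), the reliability-weighted placement criterion reads

\[
J \;=\; \sum_{k} P_k \int_{0}^{T} \sum_{i} q_{i,k}^{2}(t)\,dt ,
\]

where \(P_k\) is the probability that failure state \(k\) occurs, \(q_{i,k}(t)\) is the \(i\)-th modal amplitude of the grid under that failure state, and \(T\) is the mission life or reservicing interval; the actuator locations minimizing \(J\) then depend on the ratio of component mean time to failure to \(T\).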
Control and Communication for a Secure and Reconfigurable Power Distribution System
NASA Astrophysics Data System (ADS)
Giacomoni, Anthony Michael
A major transformation is taking place throughout the electric power industry to overlay existing electric infrastructure with advanced sensing, communications, and control system technologies. This transformation to a smart grid promises to enhance system efficiency, increase system reliability, support the electrification of transportation, and provide customers with greater control over their electricity consumption. Upgrading control and communication systems for the end-to-end electric power grid, however, will present many new security challenges that must be dealt with before extensive deployment and implementation of these technologies can begin. In this dissertation, a comprehensive systems approach is taken to minimize and prevent cyber-physical disturbances to electric power distribution systems using sensing, communications, and control system technologies. To accomplish this task, an intelligent distributed secure control (IDSC) architecture is presented and validated in silico for distribution systems to provide greater adaptive protection, with the ability to proactively reconfigure, and rapidly respond to disturbances. Detailed descriptions of functionalities at each layer of the architecture as well as the whole system are provided. To compare the performance of the IDSC architecture with that of other control architectures, an original simulation methodology is developed. The simulation model integrates aspects of cyber-physical security, dynamic price and demand response, sensing, communications, intermittent distributed energy resources (DERs), and dynamic optimization and reconfiguration. Applying this comprehensive systems approach, performance results for the IEEE 123 node test feeder are simulated and analyzed. The results show the trade-offs between system reliability, operational constraints, and costs for several control architectures and optimization algorithms. Additional simulation results are also provided. In particular, the advantages of an IDSC architecture are highlighted when an intermittent DER is present on the system.
Intelligent Engine Systems: Thermal Management and Advanced Cooling
NASA Technical Reports Server (NTRS)
Bergholz, Robert
2008-01-01
The objective is to provide turbine-cooling technologies to meet Propulsion 21 goals related to engine fuel burn, emissions, safety, and reliability. Specifically, the GE Aviation (GEA) Advanced Turbine Cooling and Thermal Management program seeks to develop advanced cooling and flow distribution methods for HP turbines, while achieving a substantial reduction in total cooling flow and assuring acceptable turbine component safety and reliability. Enhanced cooling techniques, such as fluidic devices, controlled-vortex cooling, and directed impingement jets, offer the opportunity to incorporate both active and passive schemes. Coolant heat transfer enhancement also can be achieved from advanced designs that incorporate multi-disciplinary optimization of external film and internal cooling passage geometry.
Eeles, Abbey L; Olsen, Joy E; Walsh, Jennifer M; McInnes, Emma K; Molesworth, Charlotte M L; Cheong, Jeanie L Y; Doyle, Lex W; Spittle, Alicia J
2017-02-01
Neurobehavioral assessments provide insight into the functional integrity of the developing brain and help guide early intervention for preterm (<37 weeks' gestation) infants. In the context of shorter hospital stays, clinicians often need to assess preterm infants prior to term equivalent age, yet few neurobehavioral assessments used in the preterm period have established interrater reliability. This study evaluated the interrater reliability of the Hammersmith Neonatal Neurological Examination (HNNE) and the NICU Network Neurobehavioral Scale (NNNS) when used both preterm and at term (>36 weeks). Thirty-five preterm infants and 11 term controls were recruited. Five assessors double-scored the HNNE and NNNS administered either preterm or at term. A one-way random effects, absolute, single-measures intraclass correlation coefficient (ICC) was calculated to determine interrater reliability. Interrater reliability for the HNNE was excellent (ICC > 0.74) for optimality scores, and good (ICC 0.60-0.74) to excellent for subtotal scores, except for 'Tone Patterns' (ICC 0.54). On the NNNS, interrater reliability was predominantly excellent for all items. Interrater agreement was generally excellent at both time points. Overall, the HNNE and NNNS neurobehavioral assessments demonstrated mostly excellent interrater reliability when used prior to term and at term.
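For reference, the one-way random effects, absolute, single-measures ICC used here, commonly written ICC(1,1), can be computed directly from the between- and within-subject mean squares; the score matrix below is fabricated for illustration.

```python
import numpy as np

def icc_1_1(x):
    """ICC(1,1) = (MSB - MSW) / (MSB + (k-1)*MSW) for an
    (n subjects x k raters) matrix of scores."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msb = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)            # between subjects
    msw = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

scores = np.array([[9, 10], [6, 7], [8, 8], [4, 5], [7, 7]])  # 5 infants, 2 assessors
print(icc_1_1(scores))
```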
A Delay-Aware and Reliable Data Aggregation for Cyber-Physical Sensing
Zhang, Jinhuan; Long, Jun; Zhang, Chengyuan; Zhao, Guihu
2017-01-01
Physical information sensed by various sensors in a cyber-physical system should be collected for further operation. In many applications, data aggregation should take reliability and delay into consideration. To address these problems, a novel Tiered Structure Routing-based Delay-Aware and Reliable Data Aggregation scheme named TSR-DARDA for spherical physical objects is proposed. By dividing the spherical network constructed by dispersed sensor nodes into circular tiers with specifically designed widths and cells, TSR-DARDA tries to enable as many nodes as possible to transmit data simultaneously. In order to ensure transmission reliability, lost packets are retransmitted. Moreover, to minimize the latency while maintaining reliability for data collection, in-network aggregation and broadcast techniques are adopted to deal with the transmission between data collecting nodes in the outer layer and their parent data collecting nodes in the inner layer. Thus, the optimization problem is transformed to minimize the delay under reliability constraints by controlling the system parameters. To demonstrate the effectiveness of the proposed scheme, we have conducted extensive theoretical analysis and comparisons to evaluate the performance of TSR-DARDA. The analysis and simulations show that TSR-DARDA leads to lower delay with reliability satisfaction. PMID:28218668
NASA Astrophysics Data System (ADS)
Yu, Zheng
2002-08-01
Facing the new demands of the optical fiber communications market, the performance and reliability of an optical network system depend almost entirely on the qualification of its fiber optics components. Complying with system requirements through Telcordia/Bellcore reliability and high-power testing has therefore become the key issue for fiber optics component manufacturers: qualification determines who stands out in an intensely competitive market, and the tests themselves require maintenance and optimization. A way is needed to reach the 'triple-win' goal expected by component makers, reliability testers, and system users. For those facing practical problems with the testing, the following seven topics address how to troubleshoot common mistakes and perform qualified reliability and high-power testing: • Qualification maintenance requirements for reliability testing • Lot control in preparing for reliability testing • Sample selection for reliability testing • Interim measurements during reliability testing • Basic reference factors relating to high-power testing • The necessity of re-qualification testing after production changes • Understanding product-family similarity through definitions
Boycheva, Elina; Contador, Israel; Fernández-Calvo, Bernardino; Ramos-Campos, Francisco; Puertas-Martín, Verónica; Villarejo-Galende, Alberto; Bermejo-Pareja, Félix
2018-06-01
We aimed to analyse the clinical utility of the Mattis Dementia Rating Scale (MDRS-2) for early detection of Alzheimer's disease (AD) and amnestic mild cognitive impairment (MCI) in a sample of Spanish older adults. A total of 125 participants (age = 75.12 ± 6.83, years of education = 7.08 ± 3.57) were classified into three diagnostic groups: 45 patients with mild AD, 37 with amnestic MCI (single and multiple domain), and 43 cognitively healthy controls (HCs). Reliability, criterion validity and diagnostic accuracy of the MDRS-2 (total and subscales) were analysed. The MDRS-2 scores, adjusted by socio-demographic characteristics, were calculated through hierarchical multiple regression analysis. The global scale had adequate reliability (α = 0.736) and good criterion validity (r = 0.760, p < .001) with the Mini-Mental State Examination. The optimal cut-off point between AD patients and HCs was 124 (sensitivity [Se] = 97% and specificity [Sp] = 95%), whereas 131 (Se = 89%, Sp = 81%) was the optimal cut-off point between MCI and HCs. An optimal cut-off point of 123 had good Se (97%) but poor Sp (56%) for differentiating the AD and MCI groups. The Memory and Initiation/Perseveration subscales had the highest discriminative capacity between the groups. The MDRS-2 is a reliable and valid instrument for the assessment of cognitive impairment in Spanish older adults. In particular, optimal capacity emerged for the detection of early AD and MCI.
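Cut-off selection of this kind is commonly done by scanning candidate thresholds and maximizing the Youden index (Se + Sp - 1); the sketch below assumes that criterion and uses fabricated score distributions, since the study's exact procedure and data are not reproduced here.

```python
import numpy as np

def best_cutoff(scores, is_healthy):
    """Return the cut-off maximizing Se + Sp - 1, where lower scores
    indicate impairment (as on the MDRS-2): impaired means score <= c."""
    best_c, best_j = None, -1.0
    for c in np.unique(scores):
        se = np.mean(scores[~is_healthy] <= c)   # impaired correctly flagged
        sp = np.mean(scores[is_healthy] > c)     # healthy correctly passed
        if se + sp - 1.0 > best_j:
            best_c, best_j = c, se + sp - 1.0
    return best_c, best_j

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(118, 6, 45), rng.normal(134, 5, 43)]).round()
healthy = np.concatenate([np.zeros(45, bool), np.ones(43, bool)])
print(best_cutoff(scores, healthy))
```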
NASA Astrophysics Data System (ADS)
Streuber, Gregg Mitchell
Environmental and economic factors motivate the pursuit of more fuel-efficient aircraft designs. Aerodynamic shape optimization is a powerful tool in this effort, but is hampered by the presence of multimodality in many design spaces. Gradient-based multistart optimization uses a sampling algorithm and multiple parallel optimizations to reliably apply fast gradient-based optimization to moderately multimodal problems. Ensuring that the sampled geometries remain physically realizable requires manually developing specialized linear constraints for each class of problem. Utilizing free-form deformation geometry control allows these linear constraints to be written in a geometry-independent fashion, greatly easing the process of applying the algorithm to new problems. This algorithm was used to assess the presence of multimodality when optimizing a wing in subsonic and transonic flows, under inviscid and viscous conditions, and a blended wing-body under transonic, viscous conditions. Multimodality was present in every wing case, while the blended wing-body was found to be generally unimodal.
Design optimization for cost and quality: The robust design approach
NASA Technical Reports Server (NTRS)
Unal, Resit
1990-01-01
Designing reliable, low-cost, and operable space systems has become the key to future space operations. Designing high-quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The Robust Design methodology uses a mathematical tool called an orthogonal array, from design-of-experiments theory, to study a large number of decision variables with a significantly smaller number of experiments. Robust Design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose of this paper is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of the space system design process.
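As an illustration of the signal-to-noise measure described above, the following sketch computes the nominal-the-best Taguchi S/N ratio for two hypothetical orthogonal-array runs; the response values are invented for illustration:

```python
import numpy as np

def sn_nominal_the_best(y):
    """Taguchi signal-to-noise ratio (nominal-the-best):
    10*log10(mean^2 / variance); higher means more robust to noise."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# Hypothetical responses of two orthogonal-array runs, each repeated under
# noise-factor replicates; the setting with the higher S/N is preferred.
run_a = [10.1, 9.8, 10.0, 10.2]   # on target, low spread
run_b = [10.6, 9.2, 11.0, 9.3]    # similar mean, high spread
print(sn_nominal_the_best(run_a) > sn_nominal_the_best(run_b))  # True
```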
Predictive Scheduling for Electric Vehicles Considering Uncertainty of Load and User Behaviors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Bin; Huang, Rui; Wang, Yubo
2016-05-02
Uncoordinated electric vehicle (EV) charging can create unexpected load in the local distribution grid, which may degrade power quality and system reliability. The uncertainty of EV load, user behaviors, and other base load in the distribution grid is one of the challenges that impede optimal control of the EV charging problem. Previous research did not fully solve this problem due to a lack of real-world EV charging data and of proper stochastic models to describe these behaviors. In this paper, we propose a new predictive EV scheduling algorithm (PESA) inspired by Model Predictive Control (MPC), which includes a dynamic load estimation module and a predictive optimization module. The user-related EV load and base load are dynamically estimated based on historical data. At each time interval, the predictive optimization program is computed for optimal schedules given the estimated parameters. Only the first element of the algorithm outputs is implemented, in accordance with the MPC paradigm. The current-multiplexing function in each Electric Vehicle Supply Equipment (EVSE) is considered, and accordingly a virtual load is modeled to handle the uncertainties of future EV energy demands. The system is validated with real-world EV charging data collected on the UCLA campus, and the experimental results indicate that our proposed model not only reduces load variation by up to 40% but also maintains a high level of robustness. Finally, the IEC 61850 standard is utilized to standardize the data models involved, which is significant for more reliable and large-scale implementation.
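A minimal sketch of the receding-horizon (MPC) idea the paper builds on, assuming a load-flattening objective, cvxpy for the convex program, and hypothetical forecast numbers; this is not the authors' PESA implementation:

```python
import numpy as np
import cvxpy as cp

def mpc_charge_step(base_forecast, energy_remaining, p_max, dt_h=0.25):
    """One receding-horizon step: schedule charging power over the forecast
    horizon to flatten total load, then return only the first decision (MPC)."""
    H = len(base_forecast)
    p = cp.Variable(H, nonneg=True)            # EV charging power per slot, kW
    total = base_forecast + p                  # predicted aggregate load
    objective = cp.Minimize(cp.sum_squares(total - cp.sum(total) / H))
    constraints = [p <= p_max, cp.sum(p) * dt_h == energy_remaining]
    cp.Problem(objective, constraints).solve()
    return float(p.value[0])  # apply the first element only, then re-estimate

base = np.array([3.0, 4.5, 6.0, 5.0, 3.5, 2.0])  # hypothetical kW forecast
print(mpc_charge_step(base, energy_remaining=4.0, p_max=7.2))
```

At the next interval the base-load and demand estimates are refreshed and the whole problem is solved again, which is what makes the scheme robust to forecast error.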
Impacts of Intelligent Automated Quality Control on a Small Animal APD-Based Digital PET Scanner
NASA Astrophysics Data System (ADS)
Charest, Jonathan; Beaudoin, Jean-François; Bergeron, Mélanie; Cadorette, Jules; Arpin, Louis; Lecomte, Roger; Brunet, Charles-Antoine; Fontaine, Réjean
2016-10-01
Stable system performance is mandatory to warrant the accuracy and reliability of biological results relying on small animal positron emission tomography (PET) imaging studies. This simple requirement sets the ground for imposing routine quality control (QC) procedures to keep PET scanners at a reliable optimal performance level. However, such procedures can become burdensome for scanner operators to implement, especially taking into account the increasing number of data acquisition channels in newer-generation PET scanners. In systems using pixel detectors to achieve enhanced spatial resolution and contrast-to-noise ratio (CNR), the QC workload rapidly increases to unmanageable levels due to the number of independent channels involved. An artificial-intelligence-based QC system, referred to as Scanner Intelligent Diagnosis for Optimal Performance (SIDOP), was proposed to help reduce the QC workload by performing automatic channel fault detection and diagnosis. SIDOP consists of four high-level modules that employ machine learning methods to perform their tasks: Parameter Extraction, Channel Fault Detection, Fault Prioritization, and Fault Diagnosis. Ultimately, SIDOP submits a prioritized faulty-channel list to the operator and proposes actions to correct the faults. To validate that SIDOP can perform QC procedures adequately, it was deployed on a LabPET™ scanner and multiple performance metrics were extracted. After multiple corrections of sub-optimal scanner settings, an 8.5% (95% confidence interval (CI): [7.6, 9.3]) improvement in the CNR, a 17.0% (CI: [15.3, 18.7]) decrease in the uniformity percentage standard deviation, and a 6.8% gain in global sensitivity were observed. These results confirm that SIDOP can indeed be of assistance in performing QC procedures and restore performance to optimal figures.
Material control and accountancy at EDF PWR plants; GCN: Gestion du Combustible Nucleaire
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Cormis, F.
1991-01-01
The paper describes the comprehensive system developed and implemented at Electricite de France to provide a single, reliable nuclear material control and accounting system for all nuclear plants. This software has several objectives, among which are: the control and accountancy of nuclear material at the plant; optimization of data consistency by minimizing the possibility of transcription errors; fulfillment of statutory requirements by automatic transfer of reports to national and international safeguards authorities; and servicing of other EDF users of nuclear material data for technical or commercial purposes.
The impact of symptom stability on time frame and recall reliability in CFS.
Evans, Meredyth; Jason, Leonard A
This study investigated the potential impact of perceived symptom stability on the recall reliability of symptom severity and frequency as reported by individuals with chronic fatigue syndrome (CFS). Symptoms were recalled using three different recall timeframes (the past week, the past month, and the past six months) and at two assessment points (with one week between assessments). Participants were 51 adults (45 women and 6 men), between the ages of 29 and 66, with a current diagnosis of CFS. Multilevel model (MLM) analyses were used to determine the optimal recall timeframe (in terms of test-retest reliability) for reporting symptoms perceived as variable and as stable over time. Headaches were recalled more reliably when they were reported as stable over time. The optimal timeframe in terms of test-retest reliability for stable symptoms was highly uniform, such that all Fukuda-defined CFS symptoms were more reliably recalled at the six-month timeframe. In contrast, the optimal timeframe for CFS symptoms perceived as variable differed across symptoms. Symptom stability and recall timeframe are important to consider in order to improve the accuracy and reliability of the current methods for diagnosing this illness.
Probabilistic Design of a Plate-Like Wing to Meet Flutter and Strength Requirements
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson; Krishnamurthy, T.; Mason, Brian H.; Smith, Steven A.; Naser, Ahmad S.
2002-01-01
An approach is presented for carrying out reliability-based design of a metallic, plate-like wing to meet strength and flutter requirements that are given in terms of risk/reliability. The design problem is to determine the thickness distribution such that wing weight is a minimum and the probability of failure is less than a specified value. Failure is assumed to occur if either the flutter speed is less than a specified allowable or the stress caused by a pressure loading is greater than a specified allowable. Four uncertain quantities are considered: wing thickness, calculated flutter speed, allowable stress, and magnitude of a uniform pressure load. The reliability-based design optimization approach described herein starts with a design obtained using conventional deterministic design optimization with margins on the allowables. Reliability is calculated using Monte Carlo simulation with response surfaces that provide values of stresses and flutter speed. During the reliability-based design optimization, the response surfaces and move limits are coordinated to ensure accuracy of the response surfaces. Studies carried out in the paper show the relationship between reliability and weight and indicate that, for the design problem considered, increases in reliability can be obtained with modest increases in weight.
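A minimal sketch of the Monte Carlo step described above, with crude linear response surfaces standing in for the paper's fitted ones; all coefficients, distributions, and allowables are placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Crude linear response surfaces standing in for the fitted ones (placeholders):
flutter_speed = lambda t: 400.0 + 900.0 * t       # m/s as a function of thickness
stress = lambda t, q: 0.04 * q / t                # MPa from pressure load q

# Four uncertain quantities (distributions are illustrative, not the paper's)
t = rng.normal(0.012, 0.0004, N)                  # wing thickness, m
vf = flutter_speed(t) * rng.normal(1.0, 0.05, N)  # flutter-speed model error
sa = rng.normal(350.0, 20.0, N)                   # allowable stress, MPa
q = rng.normal(80.0, 8.0, N)                      # uniform pressure load, kPa

# Failure: flutter speed below its allowable OR stress above its allowable
failed = (vf < 350.0) | (stress(t, q) > sa)
print("P_f ~", failed.mean())                     # ~1e-2 with these placeholders
```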
NASA Astrophysics Data System (ADS)
Chung, Pil Seung; Song, Wonyup; Biegler, Lorenz T.; Jhon, Myung S.
2017-05-01
During the operation of a hard disk drive (HDD), the perfluoropolyether (PFPE) lubricant experiences elastic or viscous shear/elongation deformations, which affect the performance and reliability of the HDD. Therefore, the viscoelastic responses of PFPE could provide a fingerprint for designing optimal molecular architectures of lubricants to control the tribological phenomena. In this paper, we examine the rheological responses of PFPEs, including the storage (elastic) and loss (viscous) moduli (G′ and G″), by monitoring the time-dependent stress-strain relationship via non-equilibrium molecular dynamics simulations. We analyzed the rheological responses using the Cox-Merz rule and investigated the molecular-structural and thermal effects on the solid-like and liquid-like behaviors of PFPEs. The temperature dependence of the endgroup agglomeration phenomena was examined, where the functional endgroups decouple as the temperature increases. By analyzing the relaxation processes, these molecular rheological studies provide optimal lubricant selection criteria to enhance HDD performance and reliability for heat-assisted magnetic recording applications.
Regulation of transmission line capacity and reliability in electric networks
NASA Astrophysics Data System (ADS)
Celebi, Metin
This thesis is composed of two essays that analyze the incentives and optimal regulation of a monopolist line owner in providing capacity and reliability. Similar analyses in the economic literature resulted in under-investment by an unregulated line owner when line reliability was treated as an exogenous variable. However, reliability should be chosen on the basis of economic principles as well, taking into account not only engineering principles but also the preferences of electricity users. When reliability is treated as a choice variable, both over- and under-investment by the line owner becomes possible. The result depends on the cross-cost elasticity of line construction and on the interval in which the optimal choices of capacity take place. We present some sufficient conditions that lead to definite results about the incentives of the line owner. We also characterize the optimal regulation of the line owner under incomplete information. Our analysis shows that the existence of a line is justified for the social planner when the reliability of other lines on the network is not too high, or when the marginal cost of generation at the expensive generating plant is high. The expectation of higher demand in the future makes the regulator less likely to build the line if it will be congested and reliability of other lines is high enough. It is always optimal to have a congested line under complete information, but not necessarily under incomplete information.
NASA Astrophysics Data System (ADS)
Lu, M.; Lall, U.
2013-12-01
In order to mitigate the impacts of climate change, proactive management strategies to operate reservoirs and dams are needed. A multi-time-scale, climate-informed stochastic model is developed to optimize the operations of a multi-purpose single reservoir by simulating decadal, interannual, seasonal, and sub-seasonal variability. We apply the model to a setting motivated by the largest multi-purpose dam in northern India, the Bhakhra reservoir on the Sutlej River, a tributary of the Indus. This leads to a focus on the timing and amplitude of flows for the monsoon and snowmelt periods. The flow simulations are constrained by multiple sources of historical data and GCM future projections that are being developed through an NSF-funded project titled 'Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoon Asia'. The model presented is a multilevel, nonlinear programming model that aims to optimize the reservoir operating policy on a decadal horizon and the operation strategy on an annually updated basis. The model is hierarchical, in that two optimization models designated for different time scales are nested like matryoshka dolls. The two optimization models have similar mathematical formulations, with some modifications to meet the constraints within each time frame. The first level of the model provides an optimization solution for policy makers to determine contracted annual releases to different uses with a prescribed reliability; the second level is a within-the-period (e.g., year) operation optimization scheme that allocates the contracted annual releases on a subperiod (e.g., monthly) basis, with additional benefit for extra release and a penalty for failure. The model maximizes the net benefit of irrigation, hydropower generation, and flood control in each of the periods. The model design thus facilitates the consistent application of weather and climate forecasts to improve the operation of reservoir systems. The decadal flow simulations are re-initialized every year with updated climate projections to improve the reliability of the operation rules for the next year, within which the seasonal operation strategies are nested. The multi-level structure can be repeated for monthly operation with weekly subperiods to take advantage of evolving weather forecasts and seasonal climate forecasts. As a result of the hierarchical structure, sub-seasonal and even weather-time-scale updates and adjustments can be achieved. Given an ensemble of these scenarios, the McISH reservoir simulation-optimization model is able to derive the desired reservoir storage levels, including minimum and maximum, as a function of calendar date, and the associated release patterns. The multi-time-scale approach allows adaptive management of water supplies acknowledging the changing risks, meeting the objectives over the decade in expected value while controlling the near-term and planning-period risk through probabilistic reliability constraints. For the applications presented, the target season is the monsoon season from June to September. The model also includes a monthly flood volume forecast model, based on a copula density fit to the monthly flow and the flood volume flow. This is used to guide dynamic allocation of the flood control volume given the forecasts.
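The within-the-period level can be pictured as a small linear program. The sketch below allocates a year's releases monthly under storage mass balance, assuming a single benefit-maximizing objective and hypothetical inflows; the paper's actual model adds contracted releases, reliability constraints, and flood-control terms:

```python
import numpy as np
from scipy.optimize import linprog

def monthly_allocation(inflow, s0, s_max, r_max, benefit):
    """Within-the-period level of a nested scheme: allocate a year's releases
    monthly to maximize benefit subject to storage mass balance."""
    T = len(inflow)
    L = np.tril(np.ones((T, T)))                 # cumulative-sum operator
    cum_q = L @ inflow
    # storage s_t = s0 + cum_q_t - (L r)_t must satisfy 0 <= s_t <= s_max
    A_ub = np.vstack([L, -L])
    b_ub = np.concatenate([s0 + cum_q, s_max - s0 - cum_q])
    res = linprog(c=-np.asarray(benefit), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, r_max)] * T, method="highs")
    return res.x

inflow = np.array([5, 8, 20, 35, 30, 12, 6, 4, 3, 3, 4, 5], dtype=float)
print(monthly_allocation(inflow, s0=40.0, s_max=100.0, r_max=25.0,
                         benefit=np.ones(12)))
```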
Pressure-Aware Control Layer Optimization for Flow-Based Microfluidic Biochips.
Wang, Qin; Xu, Yue; Zuo, Shiliang; Yao, Hailong; Ho, Tsung-Yi; Li, Bing; Schlichtmann, Ulf; Cai, Yici
2017-12-01
Flow-based microfluidic biochips are attracting increasing attention with successful biomedical applications. One critical issue with flow-based microfluidic biochips is the large number of microvalves that require peripheral control pins. Even using the broadcast addressing scheme, i.e., one control pin controlling multiple microvalves simultaneously, thousands of microvalves would still require hundreds of control pins, which is unrealistic. To address this critical challenge in control scalability, the control-layer multiplexer was introduced to effectively reduce the number of control pins to a logarithmic function of the number of microvalves. There are two practical design issues with the control-layer multiplexer: (1) the reliability issue caused by frequent control-valve switching, and (2) the pressure-degradation problem caused by control-valve switching without pressure refreshing from the pressure source. This paper addresses these two design issues with the proposed Hamming-distance-based switching sequence optimization method and the XOR-based pressure refreshing method. Simulation results demonstrate the effectiveness and efficiency of the proposed methods, with an average 77.2% (maximum 89.6%) improvement in total pressure refreshing cost and an average 88.5% (maximum 90.0%) improvement in pressure deviation.
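To illustrate the Hamming-distance idea (a greedy sketch, not the paper's exact switching-sequence optimization), the following ordering arranges hypothetical control-pin patterns so that consecutive multiplexer states differ in as few valve switches as possible:

```python
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def order_switch_states(states):
    """Greedy nearest-neighbour ordering of control-pin patterns: each next
    state is the remaining one closest in Hamming distance to the last,
    reducing the total number of valve switchings."""
    remaining = list(states)
    seq = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda s: hamming(seq[-1], s))
        remaining.remove(nxt)
        seq.append(nxt)
    return seq

# 4 control pins -> patterns encoded as 4-bit integers (hypothetical demand)
demand = [0b1010, 0b0011, 0b1000, 0b1011, 0b0001]
seq = order_switch_states(demand)
print([f"{s:04b}" for s in seq],
      sum(hamming(a, b) for a, b in zip(seq, seq[1:])))  # total switchings
```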
NASA Astrophysics Data System (ADS)
Cheng, Xiang-Qin; Qu, Jing-Yuan; Yan, Zhe-Ping; Bian, Xin-Qian
2010-03-01
In order to improve the security and reliability of autonomous underwater vehicle (AUV) navigation, an H∞ robust fault-tolerant controller was designed after analyzing variations in the state-feedback gain. Operating conditions and the design method were then analyzed so that the control problem could be expressed as a mathematical optimization problem, permitting the use of linear matrix inequalities (LMIs) to solve for the system's H∞ controller. Different actuator failures were then also expressed mathematically, allowing the H∞ robust controller to accommodate these events and thus be fault-tolerant. Finally, simulation results showed that the H∞ robust fault-tolerant controller provides precise AUV navigation control with strong robustness.
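For readers unfamiliar with the LMI formulation, a minimal sketch follows: the bounded-real-lemma LMI for an H∞ norm bound, solved with cvxpy on toy matrices (placeholders, not the AUV model or the paper's fault-tolerant synthesis):

```python
import numpy as np
import cvxpy as cp

# Toy 2-state system (placeholder matrices, not the AUV model from the paper)
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [0.5]])      # disturbance input
C = np.array([[1.0, 0.0]])        # performance output
D = np.zeros((1, 1))
n, m, p = 2, 1, 1

P = cp.Variable((n, n), symmetric=True)
gam = cp.Variable(nonneg=True)

# Bounded-real lemma: the H-infinity norm is below gamma iff this LMI holds.
M = cp.bmat([[A.T @ P + P @ A, P @ B,            C.T],
             [B.T @ P,         -gam * np.eye(m), D.T],
             [C,               D,                -gam * np.eye(p)]])
Ms = (M + M.T) / 2  # structurally symmetric; symmetrized so cvxpy accepts it
eps = 1e-6
prob = cp.Problem(cp.Minimize(gam),
                  [P >> eps * np.eye(n), Ms << -eps * np.eye(n + m + p)])
prob.solve(solver=cp.SCS)
print("H-infinity norm upper bound:", gam.value)
```

Because the LMI is jointly linear in P and gamma, the norm bound can be minimized directly rather than by bisection.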
Measurement and Reliability of Response Inhibition
Congdon, Eliza; Mumford, Jeanette A.; Cohen, Jessica R.; Galvan, Adriana; Canli, Turhan; Poldrack, Russell A.
2012-01-01
Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a prepotent motor response (reflected as Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. Although attention has been given to the optimal method of SSRT estimation, and initial evidence exists in support of its reliability, there is still variability in how Stop-signal task data are treated across samples. In order to examine this issue, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using intra-class correlation). Our results suggest that an approach that uses the average of all available sessions and all trials of each session, and excludes outliers based on predetermined lenient criteria, yields reliable SSRT estimates while not excluding too many participants. Our findings further support the reliability of SSRT, which is commonly used as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype. PMID:22363308
Enhanced optical design by distortion control
NASA Astrophysics Data System (ADS)
Thibault, Simon; Gauvin, Jonny; Doucet, Michel; Wang, Min
2005-09-01
The control of optical distortion is useful in the design of a variety of optical systems. The most popular example is the F-theta lens used in laser scanning systems to produce a constant scan velocity across the image plane. Many authors have designed distortion-control correctors over the last 20 years. Today, many challenging digital imaging systems can use distortion to enhance their imaging capability. A well-known example is the reversed telephoto type: if the barrel distortion is increased instead of being corrected, the result is the so-called fish-eye lens. However, if we control the barrel distortion instead of merely increasing it, the resulting system can have enhanced imaging capability. This paper presents lens designs and real system examples that clearly demonstrate how distortion control can improve system performance, such as resolution. We present innovative optical systems that increase the resolution in the field of view of interest to meet the needs of specific applications. One critical issue when designing with distortion is optimization management. As in most challenging lens designs, automatic optimization is less reliable; proper management keeps the design within the correct range, which is critical for optimal performance (size, cost, manufacturability). Many of the lens designs presented tailor a custom merit function and approach.
Improving fMRI reliability in presurgical mapping for brain tumours.
Stevens, M Tynan R; Clarke, David B; Stroink, Gerhard; Beyea, Steven D; D'Arcy, Ryan Cn
2016-03-01
Functional MRI (fMRI) is becoming increasingly integrated into clinical practice for presurgical mapping. Current efforts are focused on validating data quality, with reliability being a major factor. In this paper, we demonstrate the utility of a recently developed approach that uses receiver operating characteristic reliability (ROC-r) to: (1) identify reliable versus unreliable data sets; (2) automatically select processing options to enhance data quality; and (3) automatically select individualised thresholds for activation maps. Presurgical fMRI was conducted in 16 patients undergoing surgical treatment for brain tumours. Within-session test-retest fMRI was conducted, and the ROC-reliability of the patient group was compared to that of a previous healthy control cohort. Individually optimised preprocessing pipelines were determined to improve reliability. Spatial correspondence was assessed by comparing the fMRI results to intraoperative cortical stimulation mapping, in terms of the distance to the nearest active fMRI voxel. The average ROC-r reliability for the patients was 0.58±0.03, as compared to 0.72±0.02 in healthy controls. For the patient group, this increased significantly to 0.65±0.02 by adopting optimised preprocessing pipelines. Co-localisation of the fMRI maps with cortical stimulation was significantly better for more reliable versus less reliable data sets (8.3±0.9 vs 29±3 mm, respectively). We demonstrated ROC-r analysis for identifying reliable fMRI data sets, choosing optimal preprocessing pipelines, and selecting patient-specific thresholds. Data sets with higher reliability also showed closer spatial correspondence to cortical stimulation. ROC-r can thus identify poor fMRI data at the time of scanning, allowing for repeat scans when necessary. ROC-r analysis provides optimised and automated fMRI processing for improved presurgical mapping. Published by the BMJ Publishing Group Limited.
Morea, Jessica M; Friend, Ronald; Bennett, Robert M
2008-12-01
Illness self-concept (ISC), or the extent to which individuals are consumed by their illness, was theoretically described and evaluated with the Illness Self-Concept Scale (ISCS), a new 23-item scale, to predict adjustment in fibromyalgia. To establish convergent and discriminant validity, illness self-concept was compared to self-esteem and optimism in predicting health status, illness intrusiveness, depression, and life satisfaction. The ISCS demonstrated good reliability (alpha = .94; test-retest r = .80) and was a strong predictor of outcomes, even after controlling for optimism or self-esteem. The ISCS predicted unique variance in health-related outcomes; optimism and self-esteem did not, providing construct validation. Illness self-concept may play a significant role in coping with fibromyalgia and may prove useful in the evaluation of other chronic illnesses. (c) 2008 Wiley Periodicals, Inc.
Modeling of liquid flow in surface discontinuities
NASA Astrophysics Data System (ADS)
Lobanova, I. S.; Meshcheryakov, V. A.; Kalinichenko, A. N.
2018-01-01
Polymer composite and metallic materials have found wide application in industries such as aviation, rocketry, automotive manufacturing, and shipbuilding. Many design elements require continual quality control. Ensuring high product quality and reliability is impossible without effective nondestructive testing methods. One of these methods is penetrant testing, which uses penetrating substances and is based on liquid penetration into defect cavities. In this paper, we propose a model of liquid flow to determine the rates at which defect cavities fill with various materials and, on this basis, to choose optimal control modes.
NASA Astrophysics Data System (ADS)
Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli
2017-03-01
Electrostatic or capacitive accelerometers are among the highest-volume microelectromechanical systems (MEMS) products today. The design of such devices is a complex task, since they must meet many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of reliability problems, MEMS technology is not yet fully established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first-order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.
Reliability of Wearable Inertial Measurement Units to Measure Physical Activity in Team Handball.
Luteberget, Live S; Holme, Benjamin R; Spencer, Matt
2018-04-01
To assess the reliability and sensitivity of commercially available inertial measurement units to measure physical activity in team handball. Twenty-two handball players were instrumented with 2 inertial measurement units (OptimEye S5; Catapult Sports, Melbourne, Australia) taped together. They participated in either a laboratory assessment (n = 10) consisting of 7 team handball-specific tasks or a field assessment (n = 12) conducted in 12 training sessions. Variables, including PlayerLoad™ and inertial movement analysis (IMA) magnitude and counts, were extracted from the manufacturer's software. IMA counts were divided into intensity bands of low (1.5-2.5 m·s⁻¹), medium (2.5-3.5 m·s⁻¹), high (>3.5 m·s⁻¹), medium/high (>2.5 m·s⁻¹), and total (>1.5 m·s⁻¹). Reliability between devices and sensitivity was established using the coefficient of variation (CV) and the smallest worthwhile difference (SWD). Laboratory assessment: IMA magnitude showed good reliability (CV = 3.1%) in well-controlled tasks. CV increased (4.4-6.7%) in more-complex tasks. Field assessment: Total IMA counts (CV = 1.8% and SWD = 2.5%), PlayerLoad (CV = 0.9% and SWD = 2.1%), and their associated variables (CV = 0.4-1.7%) showed good reliability, well below the SWD. However, the CV of IMA increased when categorized into intensity bands (2.9-5.6%). The reliability of IMA counts was good when data were displayed as total, high, or medium/high counts. Good reliability for PlayerLoad and associated variables was evident. The CV of the aforementioned variables was well below the SWD, suggesting that OptimEye's inertial measurement unit and its software are sensitive for use in team handball.
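A small sketch of how such between-device CV and SWD figures can be computed, assuming the typical-error convention (SD of paired differences divided by √2) and SWD = 0.2 × between-subject SD; these conventions are common in sport science but are not necessarily the authors' exact formulas, and the readings are invented:

```python
import numpy as np

def between_device_cv(dev1, dev2):
    """Typical-error CV% from paired device readings: SD of the paired
    differences divided by sqrt(2), relative to the grand mean."""
    d = np.asarray(dev1, float) - np.asarray(dev2, float)
    typical_error = d.std(ddof=1) / np.sqrt(2.0)
    return 100.0 * typical_error / np.mean([dev1, dev2])

def smallest_worthwhile_difference(subject_means):
    """SWD = 0.2 x between-subject SD (a common convention; assumption)."""
    return 0.2 * np.std(subject_means, ddof=1)

unit_a = [102, 87, 95, 110, 99]   # hypothetical IMA counts, device 1
unit_b = [ 99, 90, 93, 108, 101]  # same sessions, device 2 (taped together)
print(between_device_cv(unit_a, unit_b))
print(smallest_worthwhile_difference(np.mean([unit_a, unit_b], axis=0)))
```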
Barabas, Sascha; Spindler, Theresa; Kiener, Richard; Tonar, Charlotte; Lugner, Tamara; Batzilla, Julia; Bendfeldt, Hanna; Rascle, Anne; Asbach, Benedikt; Wagner, Ralf; Deml, Ludwig
2017-03-07
In healthy individuals, cytomegalovirus (CMV) infection is efficiently controlled by CMV-specific cell-mediated immunity (CMI). Functional impairment of CMI in immunocompromised individuals, however, can lead to uncontrolled CMV replication and severe clinical complications. Close monitoring of CMV-specific CMI is therefore clinically relevant and might allow a reliable prognosis of CMV disease as well as assist personalized therapeutic decisions. The objective of this work was the optimization and technical validation of an IFN-γ ELISpot assay for a standardized, sensitive and reliable quantification of CMV-reactive effector cells. T-activated® immunodominant CMV IE-1 and pp65 proteins were used as stimulants. All basic assay parameters and reagents were tested and optimized to establish a user-friendly protocol and maximize the signal-to-noise ratio of the ELISpot assay. The optimized and standardized ELISpot revealed low intra-assay, inter-assay and inter-operator variability (coefficient of variation, CV, below 22%), and the inter-site CV was lower than 40%. Good assay linearity was obtained between 6 × 10⁴ and 2 × 10⁵ PBMC per well upon stimulation with T-activated® IE-1 (R² = 0.97) and pp65 (R² = 0.99) antigens. Remarkably, stimulation of peripheral blood mononuclear cells (PBMC) with T-activated® IE-1 and pp65 proteins resulted in the activation of a broad range of CMV-reactive effector cells, including CD3+CD4+ (Th), CD3+CD8+ (CTL), CD3−CD56+ (NK) and CD3+CD56+ (NKT-like) cells. Accordingly, the optimized IFN-γ ELISpot assay revealed very high sensitivity (97%) in a cohort of 45 healthy donors, of which 32 were CMV IgG-seropositive. The combined use of T-activated® IE-1 and pp65 proteins for the stimulation of PBMC with the optimized IFN-γ ELISpot assay represents a highly standardized, valuable tool to monitor the functionality of CMV-specific CMI with great sensitivity and reliability.
Basic Principles of Electrical Network Reliability Optimization in Liberalised Electricity Market
NASA Astrophysics Data System (ADS)
Oleinikova, I.; Krishans, Z.; Mutule, A.
2008-01-01
The authors propose to select long-term solutions to the reliability problems of electrical networks at the development-planning stage. The guidelines, or basic principles, of such optimization are: 1) its dynamic nature; 2) development sustainability; 3) integrated solution of network development and electricity supply reliability problems; 4) consideration of information uncertainty; 5) concurrent consideration of network and generation development problems; 6) application of specialized information technologies; 7) definition of requirements for independent electricity producers. The article reviews the major aspects of the liberalized electricity market and its functions and tasks, with emphasis placed on the optimization of electrical network development as a significant component of sustainable power system management.
Proposal and validation of a clinical trunk control test in individuals with spinal cord injury.
Quinzaños, J; Villa, A R; Flores, A A; Pérez, R
2014-06-01
One of the problems that arise in spinal cord injury (SCI) is alteration of trunk control. Despite the need for standardized scales for evaluating trunk control in SCI, none exist. To propose and validate a trunk control test in individuals with SCI. National Institute of Rehabilitation, Mexico. The test was developed and later evaluated for reliability and for criterion, content, and construct validity. We carried out 531 tests on 177 patients and found high inter- and intra-rater reliability. In terms of criterion validity, analysis of variance demonstrated a statistically significant difference in the test scores of patients with adequate versus inadequate trunk control according to the assessment of a group of experts. A receiver operating characteristic curve was plotted to optimize the instrument's cutoff point, which was determined to be 13 points, with a sensitivity of 98% and a specificity of 92.2%. With regard to construct validity, the correlation between the proposed test and the Spinal Cord Independence Measure (SCIM) was 0.873 (P=0.001), and that with time since injury was 0.437 (P=0.001). For testing the hypothesis with qualitative variables, the Kruskal-Wallis test was performed, which showed a statistically significant difference between the scores on the proposed scale for each group defined by these variables. It was shown experimentally that the proposed trunk control test is valid and reliable. Furthermore, the test can be used for all patients with SCI regardless of the type and level of injury.
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first-order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. The second-order reliability method is therefore required to evaluate reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
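A minimal sketch of the symmetric rank-one update mentioned above, with the standard denominator safeguard (the article's exact safeguards may differ):

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one (SR1) Hessian update: B+ = B + (r r^T)/(r^T s)
    with r = y - B s, skipped when the denominator is too small."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B  # skip the update to preserve numerical stability
    return B + np.outer(r, r) / denom

# One step on a quadratic with true Hessian H: the secant condition B+ s = y
# holds exactly after the update.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
grad = lambda x: H @ x
x0, x1 = np.array([1.0, 1.0]), np.array([0.6, 0.9])
B = sr1_update(np.eye(2), x1 - x0, grad(x1) - grad(x0))
print(B)
```

Unlike BFGS, SR1 needs only one rank-one correction per iteration and can represent indefinite curvature, which is why it is attractive for approximating Hessians of probabilistic constraints.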
Power Hardware-in-the-Loop (PHIL) Testing Facility for Distributed Energy Storage (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neubauer.J.; Lundstrom, B.; Simpson, M.
2014-06-01
The growing deployment of distributed, variable generation and evolving end-user load profiles presents a unique set of challenges to grid operators responsible for providing reliable and high-quality electrical service. Mass deployment of distributed energy storage systems (DESS) has the potential to solve many of the associated integration issues while offering reliability and energy security benefits other solutions cannot. However, tools to develop, optimize, and validate DESS control strategies and hardware are in short supply. To fill this gap, NREL has constructed a power hardware-in-the-loop (PHIL) test facility that connects DESS, grid simulator, and load bank hardware to a distribution feeder simulation.
Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.
1988-01-01
The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.
Operating Room of the Future: Advanced Technologies in Safe and Efficient Operating Rooms
2008-10-01
fit” or compatibility with different tasks. Ideally, the optimal match between tasks and well-designed display alternatives will be self-apparent... hierarchical display environment. The FARO robot arm is used as an accurate and reliable tracker to control a virtual camera. The virtual camera pose is... in learning outcomes due to self-feedback, improvements in learning outcomes due to instructor feedback, and synchronous versus asynchronous
Routing UAVs to Co-Optimize Mission Effectiveness and Network Performance with Dynamic Programming
2011-03-01
Heuristics on Hexagonal Connected Dominating Sets to Model Routing Dissemination," in Communication Theory, Reliability, and Quality of Service (CTRQ)... [24] Matthew Compton, Capt., USAF, Improving the Quality of Service and Security of Military Networks with a Network Tasking Order Process, 2010. [25]... Wesley, 2006. [32] James Haught, "Adaptive Quality of Service Engine with Dynamic Queue Control," Air Force Institute of Technology, Wright...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisner, R.; Melin, A.; Burress, T.
The objective of this project is to demonstrate the improved reliability and increased performance made possible by deeply embedding instrumentation and controls (I&C) in nuclear power plant (NPP) components and systems. The project employs a highly instrumented canned-rotor, magnetic-bearing, fluoride salt pump as its I&C technology demonstration platform. I&C is an intimate part of the basic millisecond-by-millisecond functioning of the system; treating I&C as an integral part of the system design is innovative and allows significant improvement in capabilities and performance. As systems become more complex and greater performance is required, traditional I&C design techniques become inadequate and more advanced I&C needs to be applied. New I&C techniques enable optimal and reliable performance and tolerance of noise and uncertainties in the system, rather than merely monitoring quasistable performance. Traditionally, I&C has been incorporated in NPP components after the design is nearly complete; adequate performance was obtained through over-design. By incorporating I&C at the beginning of the design phase, the control system can provide superior performance and reliability and enable designs that are otherwise impossible. This report describes the progress and status of the project and provides a conceptual design overview for the platform to demonstrate the performance and reliability improvements enabled by advanced embedded I&C.
Reliability Constrained Priority Load Shedding for Aerospace Power System Automation
NASA Technical Reports Server (NTRS)
Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)
2000-01-01
The need to improve load shedding on board the space station is one of the goals of aerospace power system automation. Optimum load-shedding functions must account for several constraints, including the congestion margin determined by weighted probabilistic contingencies, component/system reliability indices, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimal load schedule is based on the priority, value, and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett optimization, is proposed. We extended the Everett method to handle the expected congestion margin and reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is incorporated in the optimization method. It assists in selecting which feeder load to shed, along with the location, value, and priority of that load; a cost-benefit analysis of the load profile is also included in the scheme. The scheme is tested using a benchmark NASA system consisting of generators, loads, and a network.
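A toy sketch of priority-based shedding selection (illustrative only; the paper's scheme couples an extended Everett optimization with congestion-margin and reliability constraints):

```python
def shed_loads(loads, required_relief_kw):
    """Greedy sketch: shed the least-critical, least-valuable feeders first
    until the required congestion relief is reached."""
    # each load: (name, kW, priority 1=critical..5=sheddable, value $/kWh)
    candidates = sorted(loads, key=lambda l: (-l[2], l[3]))
    shed, relief = [], 0.0
    for name, kw, priority, value in candidates:
        if relief >= required_relief_kw:
            break
        if priority > 1:                 # never shed critical loads
            shed.append(name)
            relief += kw
    return shed, relief

loads = [("life_support", 3.0, 1, 9.9), ("experiment_a", 1.2, 4, 0.8),
         ("lighting_b", 0.6, 5, 0.2), ("comms", 1.5, 2, 3.0)]
print(shed_loads(loads, required_relief_kw=1.5))
```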
Shukla, Chinmay A
2017-01-01
The implementation of automation in multistep flow synthesis is essential for transforming laboratory-scale chemistry into a reliable industrial process. In this review, we briefly introduce the role of automation based on its application in synthesis, viz., autosampling and inline monitoring, optimization, and process control. Subsequently, we critically review several multistep flow syntheses and suggest possible control strategies that would help reliably transfer a laboratory-scale synthesis strategy to pilot scale at its optimum conditions. Owing to the vast literature on multistep synthesis, we have classified the case studies by several criteria, viz., type of reaction, heating method, processes involving in-line separation units, telescoped synthesis, processes involving in-line quenching, and processes with the smallest time scale of operation. This classification covers a broad range of the multistep synthesis literature. PMID:28684977
Reliability Assessment of a Robust Design Under Uncertainty for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J. -W.; Newman, Perry A.
2003-01-01
The paper presents reliability assessment results for the robust designs under uncertainty of a 3-D flexible wing previously reported by the authors. Reliability assessments (additional optimization problems) of the active constraints at the various probabilistic robust design points are obtained and compared with the constraint values or target constraint probabilities specified in the robust design. In addition, reliability-based sensitivity derivatives with respect to design variable mean values are also obtained and shown to agree with finite difference values. These derivatives allow one to perform reliability based design without having to obtain second-order sensitivity derivatives. However, an inner-loop optimization problem must be solved for each active constraint to find the most probable point on that constraint failure surface.
NASA Technical Reports Server (NTRS)
Poberezhskiy, Ilya Y; Chang, Daniel H.; Erlig, Herman
2011-01-01
Optical metrology system reliability during a prolonged space mission is often limited by the reliability of pump laser diodes. We developed a metrology laser pump module architecture that meets NASA SIM Lite instrument optical power and reliability requirements by combining the outputs of multiple single-mode pump diodes in a low-loss, high port count fiber coupler. We describe Monte-Carlo simulations used to calculate the reliability of the laser pump module and introduce a combined laser farm aging parameter that serves as a load-sharing optimization metric. Employing these tools, we select pump module architecture, operating conditions, biasing approach and perform parameter sensitivity studies to investigate the robustness of the obtained solution.
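A minimal sketch of the kind of Monte Carlo reliability calculation described above, modeling the module as a k-of-n redundant set of diodes with Weibull lifetimes; the shape and characteristic-life values are placeholders, not SIM Lite qualification data:

```python
import numpy as np

def pump_module_reliability(n_diodes, k_required, mission_yrs,
                            weibull_shape=1.5, char_life_yrs=25.0,
                            n_trials=100_000, seed=1):
    """Monte-Carlo sketch of a redundant pump module: the module survives if
    at least k of n single-mode diodes outlive the mission duration."""
    rng = np.random.default_rng(seed)
    lifetimes = char_life_yrs * rng.weibull(weibull_shape,
                                            size=(n_trials, n_diodes))
    survivors = (lifetimes > mission_yrs).sum(axis=1)
    return (survivors >= k_required).mean()

# e.g. 8 diodes combined in a fiber coupler, 5 needed for full optical power
print(pump_module_reliability(n_diodes=8, k_required=5, mission_yrs=5.5))
```

Varying the bias current (and hence the effective characteristic life) per diode turns this into the load-sharing optimization the abstract alludes to.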
Reliability models: the influence of model specification in generation expansion planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stremel, J.P.
1982-10-01
This paper is a critical evaluation of reliability methods used for generation expansion planning. It is shown that the methods for treating uncertainty are critical for determining the relative reliability value of expansion alternatives. It is also shown that the specification of the reliability model will not favor all expansion options equally; consequently, the model is biased. In addition, reliability models should be augmented with an economic value of reliability (such as the cost of emergency procedures or energy not served). Generation expansion evaluations which ignore the economic value of excess reliability can be shown to be inconsistent. The conclusions are that, in general, a reliability model simplifies generation expansion planning evaluations. However, for a thorough analysis, the expansion options should be reviewed for candidates which may be unduly rejected because of the bias of the reliability model. This implies that, for a consistent formulation in an optimization framework, the reliability model should be replaced with a full economic optimization which includes the costs of emergency procedures and interruptions in the objective function.
Adaptive Optimization of Aircraft Engine Performance Using Neural Networks
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Long, Theresa W.
1995-01-01
Preliminary results are presented on the development of an adaptive neural-network-based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system that is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle) and to improve aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural-network-based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line. It is hoped that it will provide additional benefits beyond those of PSC. The PSC algorithm is computationally intensive, it is valid only at near-steady-state flight conditions, and it has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller. Specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at steady-state and transient conditions, and it will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware are described and preliminary neural network training results are presented.
Study of Fuze Structure and Reliability Design Based on the Direct Search Method
NASA Astrophysics Data System (ADS)
Lin, Zhang; Ning, Wang
2017-03-01
Redundant design is one of the important methods for improving system reliability, but its design often involves the mutual coupling of multiple factors. In this study, the Direct Search Method is introduced into optimum redundancy configuration for design optimization, in which reliability, cost, structural weight, and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of an aircraft-critical system are computed. The results show that this method is convenient and workable and, upon appropriate modification, applicable to the redundancy configuration and optimization of various designs. The method therefore has good practical value.
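A small stand-in for the redundancy-allocation search (an exhaustive direct search rather than the paper's specific Direct Search Method), maximizing series-system reliability under cost and weight caps with hypothetical component data:

```python
from itertools import product

def system_reliability(alloc, r):
    """Series system of subsystems with parallel redundancy:
    R = prod_i [1 - (1 - r_i)^(n_i)]."""
    R = 1.0
    for n_i, r_i in zip(alloc, r):
        R *= 1.0 - (1.0 - r_i) ** n_i
    return R

def best_allocation(r, cost, weight, cost_cap, weight_cap, n_max=4):
    """Search integer redundancy levels, keeping only allocations that
    satisfy the cost and structural-weight constraints."""
    best = None
    for alloc in product(range(1, n_max + 1), repeat=len(r)):
        if (sum(n * c for n, c in zip(alloc, cost)) <= cost_cap and
                sum(n * w for n, w in zip(alloc, weight)) <= weight_cap):
            R = system_reliability(alloc, r)
            if best is None or R > best[1]:
                best = (alloc, R)
    return best

# hypothetical 3-subsystem fuze: component reliabilities, unit costs, weights
print(best_allocation(r=[0.90, 0.95, 0.85], cost=[2, 3, 1],
                      weight=[1.0, 0.5, 0.8], cost_cap=14, weight_cap=6.0))
```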
Finite Energy and Bounded Attacks on Control System Sensor Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djouadi, Seddik M; Melin, Alexander M; Ferragut, Erik M
Control system networks are increasingly being connected to enterprise-level networks. These connections leave critical industrial control systems vulnerable to cyber-attacks. Most of the effort in protecting these cyber-physical systems (CPS) has been in securing the networks using information security techniques and in addressing protection and reliability concerns at the control system level against random hardware and software failures. However, besides these failures, the inability of information security techniques to protect against all intrusions means that the control system must be resilient to various signal attacks, for which new analysis and detection methods need to be developed. In this paper, sensor signal attacks are analyzed for observer-based controlled systems. The threat surface for sensor signal attacks is subdivided into denial-of-service, finite-energy, and bounded attacks. In particular, the error signals between the states of attack-free systems and systems subject to these attacks are quantified. Optimal sensor and actuator signal attacks for finite- and infinite-horizon linear quadratic (LQ) control, in terms of maximizing the corresponding cost functions, are computed. The closed-loop systems under optimal signal attacks are provided. Illustrative numerical examples are provided, together with an application to a power network with distributed LQ controllers.
Integrated control-system design via generalized LQG (GLQG) theory
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Hyland, David C.; Richter, Stephen; Haddad, Wassim M.
1989-01-01
Thirty years of control systems research has produced an enormous body of theoretical results in feedback synthesis. Yet such results see relatively little practical application, and there remains an unsettling gap between classical single-loop techniques (Nyquist, Bode, root locus, pole placement) and modern multivariable approaches (LQG and H∞ theory). Large-scale, complex systems, such as high-performance aircraft and flexible space structures, now demand efficient, reliable design of multivariable feedback controllers which optimally trade off performance against modeling accuracy, bandwidth, sensor noise, actuator power, and control-law complexity. A methodology is described which encompasses numerous practical design constraints within a single unified formulation. The approach, which is based upon coupled systems of modified Riccati and Lyapunov equations, encompasses time-domain linear-quadratic-Gaussian theory and frequency-domain H∞ theory, as well as classical objectives such as gain and phase margin via the Nyquist circle criterion. In addition, this approach encompasses the optimal projection approach to reduced-order controller design. The current status of the overall theory is reviewed, including both continuous-time and discrete-time (sampled-data) formulations.
Application of Boiler Op for combustion optimization at PEPCO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maines, P.; Williams, S.; Levy, E.
1997-09-01
Title IV requires the reduction of NOx at all stations within the PEPCO system. To assist PEPCO plant personnel in achieving low heat rates while meeting NOx targets, Lehigh University's Energy Research Center and PEPCO developed a new combustion optimization software package called Boiler Op. The Boiler Op code contains an expert system, neural networks, and an optimization algorithm. The expert system guides the plant engineer through a series of parametric boiler tests required for the development of a comprehensive boiler database. The data are then analyzed by the neural networks and optimization algorithm to provide the boiler control settings which result in the best possible heat rate at a target NOx level or produce minimum NOx. Boiler Op has been used at both the Potomac River and Morgantown Stations to help PEPCO engineers optimize combustion. With the use of Boiler Op, Morgantown Station operates under low-NOx restrictions and continues to achieve record heat rate values, similar to pre-retrofit conditions. Potomac River Station achieves the regulatory NOx limit through the use of Boiler Op recommended control settings and without low-NOx burners. Importantly, software like Boiler Op cannot be used alone; its application must be in concert with human intelligence to ensure unit safety, reliability, and accurate data collection.
NASA Astrophysics Data System (ADS)
Nirmala, D. B.; Raviraj, S.
2016-06-01
This paper presents the application of the Taguchi approach to obtain optimal mix proportions for self-compacting concrete (SCC) containing spent foundry sand and M-sand. Spent foundry sand is used as a partial replacement for M-sand. The SCC mix has seven control factors, namely coarse aggregate, M-sand with spent foundry sand, cement, fly ash, water, superplasticizer, and viscosity-modifying agent. The modified Nan Su method is used to proportion the initial SCC mix. An L18 (2^1 × 3^7) orthogonal array (OA), with the seven control factors at three levels, is used in the Taguchi approach, resulting in 18 SCC mix proportions. All mixtures are extensively tested in both fresh and hardened states to verify whether they meet the practical and technical requirements of SCC. The 'nominal-the-better' quality characteristic is applied to the test results to arrive at the optimal SCC mix proportion. Test results indicate that the optimal mix satisfies the requirements of fresh and hardened properties of SCC. The study reveals the feasibility of using spent foundry sand as a partial replacement for M-sand in SCC and shows that the Taguchi method is a reliable tool for arriving at an optimal SCC mix proportion.
NASA Astrophysics Data System (ADS)
Ragab, Kh. A.; Bouaicha, A.; Bouazara, M.
2017-09-01
The semi-solid casting process has the advantage of providing reliable mechanical aluminum parts that work continuously under dynamic loading, such as the control arms of automotive suspension systems. The quality performance of a dynamic control arm is related to the casting mold and gating system designs, which affect the fluidity of the semi-solid metal while filling the mold. Therefore, this study focuses on improving the mechanical performance of suspension control arms made of A357 semi-solid aluminum alloys through material characterization and casting design optimization. Mechanical and design analyses applied to the suspension arm showed the occurrence of mechanical failures at unexpected weak points. Metallurgical analysis showed that the main reason lies in the difficult flow of the semi-solid paste through the thin sections of a complex geometry. A design modification procedure was applied to the geometry of the suspension arm to avoid this problem and improve its quality performance. The design modification was carried out using SolidWorks design software, evaluation of stresses with ABAQUS, and simulation of flow with ProCast software. The proposed designs showed that the modified suspension arm, without ribs and with a central canvas designed as a Z, is a sound casting design showing an increase in the structural strength of the component. In this case, the maximum von Mises stress is 199 MPa, which is below the yield strength of the material. The modified casting mold design shows high uniformity and minimal turbulence of the molten metal flow during the semi-solid casting process.
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models for solving complex groundwater management problems. Artificial-intelligence-based models are most often used for this purpose, trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain, which limits the applicability of such approximation surrogates. In our study, we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem with two conflicting objectives. Hydraulic conductivity and aquifer recharge are treated as uncertain. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives are considered: maximizing total pumping from beneficial wells, and minimizing total pumping from barrier wells used for hydraulic control of saltwater intrusion. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. The reliability concept is incorporated as the percentage of surrogate models that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area of Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that as the reliability level decreases, constraint violation increases. Thus, ensemble-surrogate-based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
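The reliability measure described above, the fraction of ensemble surrogates whose constraint predictions are satisfied, reduces to a short calculation; the sketch below uses invented salinity predictions:

```python
import numpy as np

def constraint_reliability(ensemble_preds, salinity_limit):
    """Reliability of a candidate pumping strategy: the fraction of ensemble
    surrogate models whose predicted salinities all stay within the limit."""
    preds = np.asarray(ensemble_preds)          # shape: (n_models, n_wells)
    ok = (preds <= salinity_limit).all(axis=1)  # each model: all wells safe?
    return ok.mean()

# hypothetical predictions from 5 surrogates at 3 monitoring locations (mg/L)
preds = [[420, 510, 300], [450, 495, 310], [430, 530, 290],
         [415, 505, 305], [460, 640, 320]]
print(constraint_reliability(preds, salinity_limit=600.0))  # 0.8
```

A candidate pumping pattern would then be kept only if this fraction meets the prescribed reliability level (0.99 in the application above).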
Reliable data storage system design and implementation for acoustic logging while drilling
NASA Astrophysics Data System (ADS)
Hao, Xiaolong; Ju, Xiaodong; Wu, Xiling; Lu, Junqiang; Men, Baiyong; Yao, Yongchao; Liu, Dong
2016-12-01
Owing to the limitations of real-time transmission, reliable downhole data storage and fast surface readout have become key technologies in developing tools for acoustic logging while drilling (LWD). To improve the reliability of the downhole storage system under high temperature, intense vibration and a periodic power supply, improvements were made in both hardware and software. In hardware, we integrated the storage system and the data acquisition control module onto one circuit board, reducing the complexity of the storage process, by adopting a controller combination of a digital signal processor and a field programmable gate array. In software, we developed a systematic management strategy for reliable storage. Multiple-backup independent storage was employed to increase data redundancy. A traditional error checking and correction (ECC) algorithm was improved, and the calculated ECC code was embedded into all management data and waveform data. A real-time storage algorithm for arbitrary-length data was designed to actively preserve the storage scene and ensure the independence of the stored data. The recovery procedure for management data was optimized to realize reliable self-recovery. A new bad-block management scheme of static block replacement and dynamic page marking was proposed to balance the periods of data acquisition and storage. In addition, we developed a portable surface data reading module based on a new reliable high-speed bus-to-Ethernet interface to achieve fast reading of the logging data. Experiments have shown that this system can work stably below 155 °C with a periodic power supply. The effective data reading rate reaches 1.375 Mbps with a 99.7% one-time success rate at room temperature. This work is of high practical significance for improving the reliability and field efficiency of acoustic LWD tools.
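The multiple-backup idea can be sketched in a few lines: each copy carries its own integrity check, and a read scans the independent copies until one verifies. This is a toy illustration only; a CRC32 stands in for the paper's improved ECC (which also corrects errors rather than merely detecting them), and all names are hypothetical.

```python
import zlib

def pack(record: bytes) -> bytes:
    """Prepend a CRC32 checksum so corruption is detectable on read."""
    return zlib.crc32(record).to_bytes(4, "big") + record

def unpack(blob: bytes):
    """Return the payload if its checksum verifies, else None."""
    crc, payload = int.from_bytes(blob[:4], "big"), blob[4:]
    return payload if zlib.crc32(payload) == crc else None

def read_with_backups(copies):
    """Scan the independent backup copies and return the first intact one."""
    for blob in copies:
        payload = unpack(blob)
        if payload is not None:
            return payload
    raise IOError("all backup copies corrupted")

# Toy usage: three independent copies, one corrupted in storage.
data = b"waveform frame 0001"
copies = [pack(data) for _ in range(3)]
copies[0] = copies[0][:6] + b"\xff" + copies[0][7:]  # simulate a flipped byte
assert read_with_backups(copies) == data
```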
Valadez-Bustos, Ma Guadalupe; Aguado-Santacruz, Gerardo Armando; Tiessen-Favier, Axel; Robledo-Paz, Alejandrina; Muñoz-Orozco, Abel; Rascón-Cruz, Quintin; Santacruz-Varela, Amalio
2016-04-01
Glycine betaine is a quaternary ammonium compound that accumulates in a large variety of species in response to different types of stress. Glycine betaine counteracts adverse effects caused by abiotic factors, preventing the denaturation and inactivation of proteins. Thus, its determination is important, particularly for scientists focused on relating structural, biochemical, physiological, and/or molecular responses to plant water status. In the current work, we optimized the periodide technique for the determination of glycine betaine levels. This modification permitted large numbers of samples taken from a chlorophyllic cell line of the grass Bouteloua gracilis to be analyzed. Growth kinetics were assessed using the chlorophyllic suspension to determine glycine betaine levels in control (no stress) cells and cells osmotically stressed with 14 or 21% polyethylene glycol 8000. After glycine betaine extraction, different wavelengths and reading times were evaluated in a spectrophotometer to determine the optimal quantification conditions for this osmolyte. Optimal results were obtained when readings were taken at a wavelength of 290 nm at 48 h after dissolving glycine betaine crystals in dichloroethane. We expect this modification to provide a simple, rapid, reliable, and cheap method for glycine betaine determination in plant samples and cell suspension cultures. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Siswanto, A.; Kurniati, N.
2018-04-01
An oil and gas company has 2,268 oil and gas wells. A Well Barrier Element (WBE) is installed in a well to protect people, prevent asset damage and minimize harm to the environment. The primary WBE component is the Surface Controlled Subsurface Safety Valve (SCSSV). The secondary WBE component is the set of Christmas tree valves, consisting of four valves: the Lower Master Valve (LMV), Upper Master Valve (UMV), Swab Valve (SV) and Wing Valve (WV). The current WBE Preventive Maintenance (PM) program follows the schedule suggested in the manual. Corrective Maintenance (CM) is conducted when a component fails unexpectedly. Both PM and CM incur cost and may cause production loss. This paper analyzes the failure data and reliability based on historical records. The optimal PM interval is determined so as to minimize the total maintenance cost per unit time. The optimal PM interval for the SCSSV is 730 days, for the LMV 985 days, for the UMV 910 days, for the SV 900 days and for the WV 780 days. Averaged over all components, implementing the suggested intervals reduces cost by 52%, improves reliability by 4% and increases availability by 5%.
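The stated objective, choosing the PM interval that minimizes total maintenance cost per unit time, matches the classic age-replacement cost-rate model, sketched below. The Weibull lifetime parameters and cost figures are invented for illustration, not the paper's fitted values.

```python
import numpy as np

def cost_rate(T, beta, eta, c_pm, c_cm, n=2000):
    """Expected maintenance cost per unit time when a component is
    preventively replaced at age T (cost c_pm) or correctively on
    failure before T (cost c_cm > c_pm), with Weibull reliability
    R(t) = exp(-(t/eta)**beta):
        C(T) = [c_pm*R(T) + c_cm*(1 - R(T))] / integral_0^T R(t) dt
    """
    t = np.linspace(0.0, T, n)
    R = np.exp(-(t / eta) ** beta)
    expected_cycle = np.trapz(R, t)  # mean length of a replacement cycle
    return (c_pm * R[-1] + c_cm * (1.0 - R[-1])) / expected_cycle

# Hypothetical SCSSV-like wear-out (beta > 1), eta in days:
grid = np.arange(50.0, 3000.0, 5.0)
costs = [cost_rate(T, beta=2.5, eta=1500.0, c_pm=1.0, c_cm=8.0) for T in grid]
print("optimal PM interval ~", grid[int(np.argmin(costs))], "days")
```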
NASA Technical Reports Server (NTRS)
Miller, David W.; Uebelhart, Scott A.; Blaurock, Carl
2004-01-01
This report summarizes work performed by the Space Systems Laboratory (SSL) for NASA Langley Research Center in the field of performance optimization for systems subject to uncertainty. The objective of the research is to develop design methods and tools for the aerospace vehicle design process that take into account lifecycle uncertainties. It recognizes that uncertainty between the predictions of integrated models and data collected from the system in its operational environment is unavoidable. Given the presence of uncertainty, the goal of this work is to develop means of identifying critical sources of uncertainty, and to combine these with the analytical tools used with integrated modeling. In this manner, system uncertainty analysis becomes part of the design process, and can motivate redesign. The specific program objectives were: 1. To incorporate uncertainty modeling, propagation and analysis into the integrated (controls, structures, payloads, disturbances, etc.) design process to derive the error bars associated with performance predictions. 2. To apply modern optimization tools to guide the expenditure of funds in a way that most cost-effectively improves the lifecycle productivity of the system by enhancing subsystem reliability and redundancy. The results for the second program objective are described separately; this report describes the work and results for the first objective: uncertainty modeling, propagation, and synthesis with integrated modeling.
Performance Evaluation of IEEE 802.11ah Networks With High-Throughput Bidirectional Traffic.
Šljivo, Amina; Kerkhove, Dwight; Tian, Le; Famaey, Jeroen; Munteanu, Adrian; Moerman, Ingrid; Hoebeke, Jeroen; De Poorter, Eli
2018-01-23
So far, existing sub-GHz wireless communication technologies have focused on low-bandwidth, long-range communication with large numbers of constrained devices. Although these characteristics are fine for many Internet of Things (IoT) applications, more demanding application requirements could not be met and legacy Internet technologies such as the Transmission Control Protocol/Internet Protocol (TCP/IP) could not be used. This has changed with the advent of the new IEEE 802.11ah Wi-Fi standard, which is much more suitable for reliable bidirectional communication and high-throughput applications over a wide area (up to 1 km). The standard offers great possibilities for network performance optimization through a number of physical- and link-layer configurable features. However, given that the optimal configuration parameters depend on traffic patterns, the standard does not dictate how to determine them. Such a large number of configuration options can lead to sub-optimal or even incorrect configurations. Therefore, we investigated how two key mechanisms, Restricted Access Window (RAW) grouping and Traffic Indication Map (TIM) segmentation, influence scalability, throughput, latency and energy efficiency in the presence of bidirectional TCP/IP traffic. We considered both high-throughput video streaming traffic and large-scale reliable sensing traffic, and investigated TCP behavior in both scenarios when the link layer introduces long delays. This article presents the relations between attainable throughput per station and the attainable number of stations, as well as the influence of RAW, TIM and TCP parameters on both. We found that up to 20 continuously streaming IP cameras can be reliably connected via IEEE 802.11ah with a maximum average data rate of 160 kbps, whereas 10 IP cameras can achieve average data rates of up to 255 kbps over 200 m. Up to 6960 stations transmitting every 60 s can be connected over 1 km with no lost packets. The presented results enable the fine tuning of RAW and TIM parameters for throughput-demanding reliable applications (e.g., video streaming, firmware updates) on the one hand, and very dense low-throughput reliable networks with bidirectional traffic on the other hand.
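A quick back-of-envelope check of the reported operating points (simple arithmetic, not a model from the article): comparing the aggregate offered load of the two camera configurations shows that a single fixed channel budget cannot explain the per-station limits, pointing instead to contention and RAW/TIM scheduling overhead that grow with station count.

```python
def aggregate_load_kbps(n_stations, per_station_kbps):
    """Total offered load if every station streams at the same rate."""
    return n_stations * per_station_kbps

print(aggregate_load_kbps(20, 160))  # 3200 kbps with 20 cameras
print(aggregate_load_kbps(10, 255))  # 2550 kbps with 10 cameras
# Fewer stations use *less* aggregate capacity yet reach a higher
# per-station rate, so per-station overhead, not raw capacity, binds.
```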
Bryce, S D; Lee, S J; Ponsford, J L; Lawrence, R J; Tan, E J; Rossell, S L
2018-06-20
Cognitive remediation (CR) is considered a potentially effective method of improving cognitive function in people with schizophrenia. Few studies, however, have explored the role of intrinsic motivation on treatment utilization or training outcomes in CR in this population. This study explored the impact of task-specific intrinsic motivation on attendance and reliable cognitive improvement in a controlled trial comparing CR with a computer game (CG) playing control. Forty-nine participants with schizophrenia or schizoaffective disorder, allocated to 10 weeks of group-based CR (n = 25) or CG control (n = 24), provided complete outcome data at baseline. Forty-three participants completed their assigned intervention. Cognition, psychopathology and intrinsic motivation were measured at baseline and end-treatment. Regression analyses explored the relative contribution of baseline motivation and other clinical factors to session attendance as well as the association of baseline and change in intrinsic motivation with the odds of reliable cognitive improvement (calculated using reliable change indices). Baseline reports of perceived program value were the only significant multivariable predictor of session attendance when including global cognition and psychiatric symptomatology. The odds of reliable cognitive improvement significantly increased with greater improvements in program interest and value from baseline to end-treatment. Motivational changes over time were highly variable between participants. Task-specific intrinsic motivation in schizophrenia may represent an important patient-related factor that contributes to session attendance and cognitive improvements in CR. Regular evaluation and enhancement of intrinsic motivation in cognitively enhancing interventions may optimize treatment engagement and the likelihood of meaningful training outcomes. Copyright © 2018 Elsevier B.V. All rights reserved.
Lewis Structures Technology, 1988. Volume 1: Structural Dynamics
NASA Technical Reports Server (NTRS)
1988-01-01
The specific purpose of the symposium was to familiarize the engineering structures community with the depth and range of research performed by the Structures Division of the Lewis Research Center and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive testing, dynamical systems, fatigue and damage, wind turbines, hot section technology, structural mechanics codes, computational methods for dynamics, structural optimization, and applications of structural dynamics.
Environment assisted degradation mechanisms in advanced light metals
NASA Technical Reports Server (NTRS)
Gangloff, R. P.; Stoner, G. E.; Swanson, R. E.
1989-01-01
A multifaceted research program was initiated on the performance of advanced light metallic alloys in aggressive aerospace environments and the associated environmental failure mechanisms. The general goal is to characterize alloy behavior quantitatively and to develop predictive mechanisms for environmental failure modes. Success in this regard will provide the basis for metallurgical optimization of alloy performance, for chemical control of aggressive environments, and for engineering life prediction with damage tolerance and long-term reliability.
DG Planning with Amalgamation of Operational and Reliability Considerations
NASA Astrophysics Data System (ADS)
Battu, Neelakanteshwar Rao; Abhyankar, A. R.; Senroy, Nilanjan
2016-04-01
Distributed Generation has been playing a vital role in dealing with issues related to distribution systems. This paper presents an approach which provides the policy maker with a set of solutions for DG placement to optimize the reliability and real power loss of the system. The optimal location of a Distributed Generator is evaluated based on performance indices derived for the reliability index and real power loss. The proposed approach is applied to a 15-bus radial distribution system and an 18-bus radial distribution system with conventional and wind distributed generators individually.
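One simple way to turn the two performance indices into a single placement score is a weighted aggregation evaluated per candidate bus, as sketched below. The per-bus values and the equal weighting are hypothetical; the paper's exact index definitions are not reproduced here.

```python
import numpy as np

def composite_index(reliability_idx, loss_idx, w_rel=0.5, w_loss=0.5):
    """Weighted sum of normalized per-bus indices (lower is better)."""
    return w_rel * np.asarray(reliability_idx) + w_loss * np.asarray(loss_idx)

# Hypothetical normalized indices for 5 candidate buses:
rel = [0.9, 0.4, 0.6, 0.3, 0.8]
loss = [0.7, 0.5, 0.2, 0.6, 0.9]
scores = composite_index(rel, loss)
print("best DG bus:", int(np.argmin(scores)) + 1)
```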
Optimization of shared autonomy vehicle control architectures for swarm operations.
Sengstacken, Aaron J; DeLaurentis, Daniel A; Akbarzadeh-T, Mohammad R
2010-08-01
The need for greater capacity in automotive transportation (in the midst of constrained resources) and the convergence of key technologies from multiple domains may eventually produce the emergence of a "swarm" concept of operations. The swarm, which is a collection of vehicles traveling at high speeds and in close proximity, will require technology and management techniques to ensure safe, efficient, and reliable vehicle interactions. We propose a shared autonomy control approach, in which the strengths of both human drivers and machines are employed in concert for this management. Building from a fuzzy logic control implementation, optimal architectures for shared autonomy addressing differing classes of drivers (represented by the driver's response time) are developed through a genetic-algorithm-based search for preferred fuzzy rules. Additionally, a form of "phase transition" from a safe to an unsafe swarm architecture as the amount of sensor capability is varied uncovers key insights on the required technology to enable successful shared autonomy for swarm operations.
NASA Technical Reports Server (NTRS)
Hess, R. A.
1976-01-01
Paramount to proper utilization of electronic displays is a method for determining pilot-centered display requirements. Display design should be viewed fundamentally as a guidance and control problem which has interactions with the designer's knowledge of human psychomotor activity. From this standpoint, reliable analytical models of human pilots as information processors and controllers can provide valuable insight into the display design process. A relatively straightforward, nearly algorithmic procedure for deriving model-based, pilot-centered display requirements was developed and is presented. The optimal or control theoretic pilot model serves as the backbone of the design methodology, which is specifically directed toward the synthesis of head-down, electronic, cockpit display formats. Some novel applications of the optimal pilot model are discussed. An analytical design example is offered which defines a format for the electronic display to be used in a UH-1H helicopter in a landing approach task involving longitudinal and lateral degrees of freedom.
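The control-theoretic pilot model referenced above is built around linear-quadratic-Gaussian regulation of the tracked error states (with observation noise and time-delay elements omitted here). A minimal sketch of that core, using a toy double-integrator plant rather than the UH-1H dynamics:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy plant: double-integrator tracking-error dynamics (not the UH-1H).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])  # weights on displayed error and error rate
R = np.array([[0.1]])     # weight on control activity (pilot effort)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal feedback gain, u = -K x
print("pilot-model feedback gain K:", K)
```

Display requirements then follow from asking which state estimates the pilot needs, and how accurately, for this regulation task to meet the required performance.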
Improvement of up-converting phosphor technology-based biosensor
NASA Astrophysics Data System (ADS)
Xie, Chengke; Huang, Lihua; Zhang, Youbao; Guo, Xiaoxian; Qu, Jianfeng; Huang, Huijie
2008-12-01
A novel biosensor based on up-converting phosphor technology (UPT) was developed several years ago. It is a kind of optical biosensor using up-converting phosphor (UCP) particles as the biological marker. Since then, several improvements have been made to this UPT-based biosensor, primarily in the control system. On one hand, the hardware of the control system has been optimized, including replacing two single-chip microcomputers (SCM) with only one, and optimal design of the keyboard interface circuit and the liquid crystal module (LCM) control circuit. These changes result in lower power consumption and higher reliability. On the other hand, a novel signal processing algorithm is proposed in this paper, which improves the automation and operating simplicity of the UPT-based biosensor. The improved system has proved to have high sensitivity (~ng/ml), high stability and good repeatability (CV < 5%), better than the former system. It can meet the needs of various applications such as rapid immunoassay and chemical and biological detection.
NASA Astrophysics Data System (ADS)
Luchner, Jakob; Anghileri, Daniela; Castelletti, Andrea
2017-04-01
Real-time control of multi-purpose reservoirs can benefit significantly from hydro-meteorological forecast products. Because of their reliability, the most widely used forecasts range over time scales from hours to a few days and are suitable for short-term operation targets such as flood control. In recent years, hydro-meteorological forecasts have become more accurate and reliable on longer time scales, which are more relevant to long-term reservoir operation targets such as water supply. While the forecast quality of such products has been studied extensively, the forecast value, i.e. the operational effectiveness of using forecasts to support water management, has received comparatively little attention. It is comparatively easy to identify the most effective forecasting information needed to design reservoir operation rules for flood control, but it is not straightforward to identify which forecast variable and lead time are needed to define effective hedging rules for operational targets with slow dynamics such as water supply. The task is even more complex when multiple targets, with diverse slow and fast dynamics, are considered at the same time. In these cases, the relative importance of different pieces of information, e.g. the magnitude and timing of the peak flow rate and the accumulated inflow over different time lags, may vary depending on the season or the hydrological conditions. In this work, we analyze the relationship between operational forecast value and streamflow forecast horizon for different multi-purpose reservoir trade-offs. We use the Information Selection and Assessment (ISA) framework to identify the most effective forecast variables and horizons for informing multi-objective reservoir operation over short- and long-term temporal scales. The ISA framework is an automatic iterative procedure to discriminate the information with the highest potential to improve multi-objective reservoir operating performance. Forecast variables and horizons are selected using a feature selection technique, which determines the most informative combination of variables in a multivariate regression model of the optimal reservoir releases obtained under perfect information at a fixed objective trade-off. The improved reservoir operation is evaluated against optimal reservoir operation conditioned upon perfect information on future disturbances, and against basic reservoir operation using only the day of the year and the reservoir level. Different objective trade-offs are selected to analyze the resulting differences in improved reservoir operation and in the selected forecast variables and horizons. For comparison, the effective streamflow forecast horizon determined by the ISA framework is benchmarked against the performance obtained with a deterministic model predictive control (MPC) optimization scheme. Both the ISA framework and the MPC optimization scheme are applied to the real-world case study of Lake Como, Italy, using perfect streamflow forecast information. The principal operation targets for Lake Como are flood control and downstream water supply, which makes its operation a suitable case study. Results provide critical feedback to reservoir operators on the use of long-term streamflow forecasts, and to the hydro-meteorological forecasting community with respect to the forecast horizon needed from reliable streamflow forecasts.
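The selection step can be sketched as greedy forward selection over candidate forecast variables and horizons, regressed against the perfect-information optimal releases. Everything below (synthetic data, plain least squares) is a stand-in, not the ISA implementation.

```python
import numpy as np

def forward_select(X, y, max_feats=3):
    """Greedily add the candidate forecast feature that most reduces the
    least-squares error of a linear model of the optimal releases y."""
    n, p = X.shape
    chosen, best_err = [], np.inf
    while len(chosen) < max_feats:
        errs = []
        for j in range(p):
            if j in chosen:
                errs.append(np.inf)
                continue
            A = np.column_stack([np.ones(n), X[:, chosen + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            errs.append(np.mean((A @ beta - y) ** 2))
        j_best = int(np.argmin(errs))
        if errs[j_best] >= best_err:
            break  # no further improvement: stop
        chosen.append(j_best)
        best_err = errs[j_best]
    return chosen

# Hypothetical candidates: accumulated inflow over 1..30-day horizons.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 30))
y = 2.0 * X[:, 6] + 0.5 * X[:, 29] + rng.normal(scale=0.1, size=500)
print("selected horizons (days):", [j + 1 for j in forward_select(X, y)])
```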
Optimizing the switching time for 400 kV SF6 circuit breakers
NASA Astrophysics Data System (ADS)
Ciulica, D.
2018-01-01
This paper presents real-time voltage and current analysis for optimizing the point-on-wave switching instant of SF6 circuit breakers. Circuit breakers play an important role in power systems, providing protection for equipment in substations embedded in transmission networks. The SF6 circuit breaker is very important equipment in power systems and is used at voltages up to 400 kV owing to its excellent performance. Controlled switching is used to eliminate transients and electrodynamic and dielectric stresses in the network during manual switching of capacitor banks, shunt reactors and power transformers. These effects reduce the reliability and lifetime of the equipment installed on the network, or may lead to erroneous protection operation.
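As an illustration of controlled (point-on-wave) switching, here is a sketch of the timing computation: given the instantaneous phase of the bus voltage and the breaker's own operating delay, the close command is timed so that contacts touch at a target point on the wave, e.g. a voltage zero when energizing capacitor banks. The targets and figures are generic point-on-wave practice, not values from this paper.

```python
import math

F = 50.0  # grid frequency, Hz (assumption)

def command_delay(phase_now_rad, target_phase_rad, breaker_delay_s):
    """Seconds to wait before issuing the close command so the contacts
    touch at the target phase, compensating the breaker operating delay."""
    phase_at_close = (phase_now_rad
                      + 2.0 * math.pi * F * breaker_delay_s) % (2.0 * math.pi)
    diff = (target_phase_rad - phase_at_close) % (2.0 * math.pi)
    return diff / (2.0 * math.pi * F)

# Energize a capacitor bank at a voltage zero (phase pi), with a 35 ms
# mechanical operating time (hypothetical):
print(command_delay(phase_now_rad=1.0, target_phase_rad=math.pi,
                    breaker_delay_s=0.035))
```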
USDA-ARS?s Scientific Manuscript database
Expressed sequence tag (EST) simple sequence repeats (SSRs) in Prunus were mined, and flanking primers designed and used for genome-wide characterization and selection of primers to optimize marker distribution and reliability. A total of 12,618 contigs were assembled from 84,727 ESTs, along with 34...
On the Relationship between Maximal Reliability and Maximal Validity of Linear Composites
ERIC Educational Resources Information Center
Penev, Spiridon; Raykov, Tenko
2006-01-01
A linear combination of a set of measures is often sought as an overall score summarizing subject performance. The weights in this composite can be selected to maximize its reliability or to maximize its validity, and the optimal choice of weights is in general not the same for these two optimality criteria. We explore several relationships…
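For the congeneric model X_i = lambda_i * eta + e_i with error variances theta_i, one standard result in this literature is that reliability is maximized by weights w_i proportional to lambda_i / theta_i, giving rho_max = A / (1 + A) with A = sum(lambda_i^2 / theta_i); the validity-maximizing weights generally differ, which is the tension the article explores. A small sketch with hypothetical numbers:

```python
import numpy as np

def maximal_reliability_weights(loadings, error_vars):
    """Weights w_i ~ lambda_i/theta_i and the maximal composite
    reliability rho = A/(1+A), with A = sum(lambda_i^2/theta_i)."""
    lam = np.asarray(loadings, dtype=float)
    theta = np.asarray(error_vars, dtype=float)
    w = lam / theta
    A = np.sum(lam ** 2 / theta)
    return w / w.sum(), A / (1.0 + A)

w, rho = maximal_reliability_weights([0.8, 0.7, 0.6], [0.36, 0.51, 0.64])
print("weights:", np.round(w, 3), " maximal reliability:", round(rho, 3))
```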
Multiagent Flight Control in Dynamic Environments with Cooperative Coevolutionary Algorithms
NASA Technical Reports Server (NTRS)
Knudson, Matthew D.; Colby, Mitchell; Tumer, Kagan
2014-01-01
Dynamic flight environments in which objectives and environmental features change with respect to time pose a difficult problem with regard to planning optimal flight paths. Path planning methods are typically computationally expensive, and are often difficult to implement in real time if system objectives are changed. This computational problem is compounded when multiple agents are present in the system, as the state and action space grows exponentially. In this work, we use cooperative coevolutionary algorithms to develop policies which control agent motion in a dynamic multiagent unmanned aerial system environment in which goals and perceptions change, while ensuring safety constraints are not violated. Rather than replanning new paths when the environment changes, we develop a policy which maps the new environmental features to a trajectory for the agent, ensuring safe and reliable operation while providing 92% of the theoretically optimal performance.
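A minimal sketch of the cooperative-coevolution scheme named above: one subpopulation of candidate policies per agent, with each individual evaluated in a team formed from the other agents' current best policies. The policy encoding and team fitness are toy stand-ins for the flight-control domain.

```python
import numpy as np

rng = np.random.default_rng(2)

def joint_fitness(policies):
    """Team reward for the joint policy (toy stand-in for the UAS sim)."""
    return -sum(np.sum((p - 0.5) ** 2) for p in policies)

def ccea(n_agents=3, pop=20, dim=4, gens=200, sigma=0.1):
    """Cooperative coevolution with one subpopulation per agent."""
    pops = [rng.random((pop, dim)) for _ in range(n_agents)]
    best = [p[0].copy() for p in pops]
    for _ in range(gens):
        for a in range(n_agents):
            # Evaluate each candidate in a team with the others' best.
            scores = [joint_fitness([ind if i == a else best[i]
                                     for i in range(n_agents)])
                      for ind in pops[a]]
            order = np.argsort(scores)[::-1]
            pops[a] = pops[a][order]
            best[a] = pops[a][0].copy()
            # Regenerate the bottom half by mutating the top half.
            half = pop // 2
            pops[a][half:] = pops[a][:half] + rng.normal(
                0.0, sigma, (pop - half, dim))
    return best, joint_fitness(best)

best, f = ccea()
print("team fitness after learning:", round(f, 4))
```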
Microgrid Controllers : Expanding Their Role and Evaluating Their Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maitra, Arindam; Pratt, Annabelle; Hubert, Tanguy
Microgrids have long been deployed to provide power to customers in remote areas as well as critical industrial and military loads. Today, they are also being proposed as grid-interactive solutions for energy-resilient communities. Such microgrids will spend most of the time operating while synchronized with the surrounding utility grid but will also be capable of separating during contingency periods due to storms or temporary disturbances such as local grid faults. Properly designed and grid-integrated microgrids can provide the flexibility, reliability, and resiliency needs of both the future grid and critical customers. These systems can be an integral part of future power system designs that optimize investments to achieve operational goals, improved reliability, and diversification of energy sources.
Terayama, Yasushi; Uchiyama, Shigeharu; Ueda, Kazuhiko; Iwakura, Nahoko; Ikegami, Shota; Kato, Yoshiharu; Kato, Hiroyuki
2018-06-01
Imaging criteria for diagnosing compressive ulnar neuropathy at the elbow (UNE) have recently been established as the maximum ulnar nerve cross-sectional area (UNCSA) upon magnetic resonance imaging (MRI) and/or ultrasonography (US). However, the levels of maximum UNCSA and diagnostic cutoff values have not yet been established. We therefore analyzed UNCSA by MRI and US in patients with UNE and in controls. We measured UNCSA at 7 levels in 30 patients with UNE and 28 controls by MRI, and at 15 levels in 12 patients with UNE and 24 controls by US. We compared UNCSA as determined by MRI or US and determined optimal diagnostic cutoff values based on receiver operating characteristic curve analysis. The UNCSA was significantly larger in the UNE group than in controls at 3, 2, 1, and 0 cm proximal and 1, 2, and 3 cm distal to the medial epicondyle for both modalities. The UNCSA was maximal at 1 cm proximal to the medial epicondyle for MRI (16.1 ± 3.5 mm²) as well as for US (17 ± 7 mm²). A cutoff value of 11.0 mm² for MRI and US was found to be optimal for differentiating between patients with UNE and controls, with an area under the receiver operating characteristic curve of 0.95 for MRI and 0.96 for US. The UNCSA measured by MRI was not significantly different from that by US. Intrarater and interrater reliabilities for UNCSA were all greater than 0.77. The UNCSA in the severe nerve dysfunction group of 18 patients was significantly larger than that in the mild nerve dysfunction group of 12 patients. By measuring UNCSA with MRI or US at 1 cm proximal to the medial epicondyle, patients with and without UNE could be discriminated at a cutoff threshold of 11.0 mm² with high sensitivity, specificity, and reliability. Level of evidence: Diagnostic III. Copyright © 2018 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Autonomous Optimization of Targeted Stimulation of Neuronal Networks
Kumar, Sreedhar S.; Wülfing, Jan; Okujeni, Samora; Boedecker, Joschka; Riedmiller, Martin
2016-01-01
Driven by clinical needs and progress in neurotechnology, targeted interaction with neuronal networks is of increasing importance. Yet, the dynamics of interaction between intrinsic ongoing activity in neuronal networks and their response to stimulation is unknown. Nonetheless, electrical stimulation of the brain is increasingly explored as a therapeutic strategy and as a means to artificially inject information into neural circuits. Strategies using regular or event-triggered fixed stimuli discount the influence of ongoing neuronal activity on the stimulation outcome and are therefore not optimal to induce specific responses reliably. Yet, without suitable mechanistic models, it is hardly possible to optimize such interactions, in particular when desired response features are network-dependent and are initially unknown. In this proof-of-principle study, we present an experimental paradigm using reinforcement-learning (RL) to optimize stimulus settings autonomously and evaluate the learned control strategy using phenomenological models. We asked how to (1) capture the interaction of ongoing network activity, electrical stimulation and evoked responses in a quantifiable ‘state’ to formulate a well-posed control problem, (2) find the optimal state for stimulation, and (3) evaluate the quality of the solution found. Electrical stimulation of generic neuronal networks grown from rat cortical tissue in vitro evoked bursts of action potentials (responses). We show that the dynamic interplay of their magnitudes and the probability to be intercepted by spontaneous events defines a trade-off scenario with a network-specific unique optimal latency maximizing stimulus efficacy. An RL controller was set to find this optimum autonomously. Across networks, stimulation efficacy increased in 90% of the sessions after learning and learned latencies strongly agreed with those predicted from open-loop experiments. Our results show that autonomous techniques can exploit quantitative relationships underlying activity-response interaction in biological neuronal networks to choose optimal actions. Simple phenomenological models can be useful to validate the quality of the resulting controllers. PMID:27509295
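The closed-loop search described above, learning the stimulation latency that trades response magnitude against the risk of interception by spontaneous bursts, can be caricatured as a bandit problem over discretized latencies. The response model below is a toy stand-in, not the networks' dynamics or the study's RL formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def evoke(latency):
    """Toy model: efficacy grows as the network recovers from the last
    burst, but longer waits risk interception by spontaneous activity."""
    recovery = 1.0 - np.exp(-latency / 0.4)
    intercepted = rng.random() < 1.0 - np.exp(-latency / 1.5)
    return 0.0 if intercepted else recovery

latencies = np.linspace(0.1, 2.0, 12)  # candidate latencies (s)
q = np.zeros_like(latencies)           # running efficacy estimates
n = np.zeros_like(latencies)

for _ in range(5000):
    if rng.random() < 0.1:                  # explore
        a = int(rng.integers(len(latencies)))
    else:                                   # exploit
        a = int(np.argmax(q))
    r = evoke(latencies[a])
    n[a] += 1
    q[a] += (r - q[a]) / n[a]               # incremental mean update

print("learned optimal latency ~",
      round(float(latencies[int(np.argmax(q))]), 2), "s")
```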
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, L.
ITN Energy Systems, Inc., and Global Solar Energy, Inc., with the assistance of NREL's PV Manufacturing R&D program, have continued the advancement of CIGS production technology through the development of trajectory-oriented predictive/control models, fault-tolerance control, control-platform development, in-situ sensors, and process improvements. Modeling activities to date include the development of physics-based and empirical models for CIGS and sputter-deposition processing, implementation of model-based control, and application of predictive models to the construction of new evaporation sources and for control. Model-based control is enabled through implementation of reduced or empirical models into a control platform. Reliability improvement activities include implementation of preventive maintenance schedules; detection of failed sensors/equipment and reconfiguration to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which, in turn, has been enabled by control and reliability improvements due to this PV Manufacturing R&D program. This has resulted in substantial improvements of flexible CIGS PV module performance and efficiency.
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies: noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions, combining different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on the starting point, stop criterion, step size constraints, background noise, algorithms used, and the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence: when the step size is enlarged, the reliability improves but the convergence deteriorates.
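A minimal sketch of an adaptive-step pattern search driven by paired comparisons: the simulated listener simply prefers the candidate closer to a hidden optimum, standing in for the listening-comfort judgments, and the grow/shrink step rules are illustrative assumptions rather than the study's settings.

```python
import numpy as np

def prefers(a, b, target):
    """Stand-in listener: picks the setting closer to a hidden optimum."""
    return np.sum((a - target) ** 2) < np.sum((b - target) ** 2)

def adaptive_pattern_search(x0, target, step=1.0, grow=1.5,
                            shrink=0.5, min_step=0.05):
    """Compass search over the three processing parameters; the step
    grows after a winning comparison and shrinks after a losing round."""
    x = np.asarray(x0, dtype=float)
    dirs = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    while step > min_step:
        improved = False
        for d in dirs:
            cand = x + step * d
            if prefers(cand, x, target):  # listener picks the candidate
                x, improved = cand, True
                break
        step = step * grow if improved else step * shrink
    return x

opt = adaptive_pattern_search([0.0, 0.0, 0.0],
                              target=np.array([2.0, -1.0, 0.5]))
print("converged setting:", np.round(opt, 2))
```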
Modeling and Simulation Reliable Spacecraft On-Board Computing
NASA Technical Reports Server (NTRS)
Park, Nohpill
1999-01-01
The proposed project will investigate modeling- and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which is known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before incorporating fault tolerance into it. Testing and fault tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance, all of which interact through a central graphical user interface.
NASA Astrophysics Data System (ADS)
Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.
2013-10-01
A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) and optimizes the structural cost under deterministic and probabilistic constraints. The Monte Carlo simulation (MCS) method is considered the most reliable method for estimating the probability of failure. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, and is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.
Electromagnetic Smart Valves for Cryogenic Applications
NASA Astrophysics Data System (ADS)
Traum, M. J.; Smith, J. L.; Brisson, J. G.; Gerstmann, J.; Hannon, C. L.
2004-06-01
Electromagnetic valves with smart control capability have been developed and demonstrated for use in the cold end of a Collins-style cryocooler. The toroidal geometry of the valves was developed utilizing a finite-element code and optimized for maximum opening force with minimum input current. Electromagnetic smart valves carry two primary benefits in cryogenic applications: 1) magnetic actuation eliminates the need for mechanical linkages and 2) valve timing can be modified during system cool down and in regular operation for cycle optimization. The smart feature of these electromagnetic valves resides in controlling the flow of current into the magnetic coil. Electronics have been designed to shape the valve actuation current, limiting the residence time of magnetic energy in the winding. This feature allows control of flow through the expander via an electrical signal while dissipating less than 0.0071 J/cycle as heat into the cold end. The electromagnetic smart valves have demonstrated reliable, controllable dynamic cycling. After 40 hours of operation, they suffered no perceptible mechanical degradation. These features enable the development of a miniaturized Collins-style cryocooler capable of removing 1 Watt of heat at 10 K.
Dowd, Jason E; Jubb, Anthea; Kwok, K Ezra; Piret, James M
2003-05-01
Consistent perfusion culture production requires reliable cell retention and control of feed rates. An on-line cell probe based on capacitance was used to assay viable biomass concentrations. A constant cell-specific perfusion rate controlled medium feed rates at a bioreactor cell concentration of approximately 5 x 10^6 cells mL^-1. Perfusion feeding was automatically adjusted based on the cell concentration signal from the on-line biomass sensor. Cell-specific perfusion rates were varied over a range of 0.05 to 0.4 nL cell^-1 day^-1. Pseudo-steady-state bioreactor indices (concentrations, cellular rates and yields) were correlated to the cell-specific perfusion rates investigated, to maximize recombinant protein production from a Chinese hamster ovary cell line. The tissue-type plasminogen activator concentration was maximized (approximately 40 mg L^-1) at 0.2 nL cell^-1 day^-1. The volumetric protein productivity (approximately 60 mg L^-1 day^-1) was maximized above 0.3 nL cell^-1 day^-1. The use of cell-specific perfusion rates provided a straightforward basis for controlling, modeling and optimizing perfusion cultures.
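The control law is simple enough to state in one line: feed rate = cell-specific perfusion rate x viable cell number. A sketch of the unit bookkeeping (function and names are illustrative, not the study's control software):

```python
def perfusion_feed_rate(cells_per_mL, volume_L, cspr_nL_per_cell_day):
    """Medium feed rate (L/day) for a constant cell-specific perfusion
    rate: feed = CSPR * total viable cells in the reactor."""
    total_cells = cells_per_mL * volume_L * 1e3   # 1 L = 1e3 mL
    return total_cells * cspr_nL_per_cell_day * 1e-9  # nL -> L

# 5e6 cells/mL in a 1 L reactor at 0.2 nL/cell/day:
print(perfusion_feed_rate(5e6, 1.0, 0.2), "L/day")  # -> 1.0 L/day
```

At the reported setpoint this works out to roughly one reactor volume per day, which is why the capacitance signal alone is enough to drive the feed pump.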
NASA Astrophysics Data System (ADS)
Llopis-Albert, C.; Peña-Haro, S.; Pulido-Velazquez, M.; Molina, J.
2012-04-01
Water quality management is complex due to the inter-relations between socio-political, environmental and economic constraints and objectives. In order to choose an appropriate policy to reduce nitrate pollution in groundwater, it is necessary to consider different objectives, often in conflict. In this paper, a hydro-economic modeling framework based on a non-linear optimization technique (CONOPT), which embeds simulation of groundwater mass transport through concentration response matrices, is used to study optimal policies for groundwater nitrate pollution control under different objectives and constraints. Three objectives were considered: recovery time (for meeting the environmental standards, as required by the EU Water Framework Directive and Groundwater Directive), maximum nitrate concentration in groundwater, and net benefits in agriculture. A further criterion was added: the reliability of meeting the nitrate concentration standards. The approach allows deriving the trade-offs between the reliability of meeting the standard, the net benefits from agricultural production and the recovery time. Two different policies were considered: spatially distributed fertilizer standards or quotas (obtained through multi-objective optimization) and fertilizer prices. The multi-objective analysis allows comparison of the achievements of the different policies, Pareto fronts (or efficiency frontiers) and trade-offs for the set of mutually conflicting objectives. The constraint method is applied to generate the set of non-dominated solutions. The multi-objective framework can be used to design groundwater management policies taking into consideration different stakeholders' interests (e.g., policy makers, farmers or environmental groups). The methodology was applied to the El Salobral-Los Llanos aquifer in Spain. Over the past 30 years the area has undergone significant socioeconomic development, mainly due to intensive groundwater use for irrigated crops, which has provoked a steady decline of groundwater levels as well as high nitrate concentrations at certain locations (above 50 mg/L). The results showed the usefulness of this multi-objective hydro-economic approach for designing sustainable nitrate pollution control policies (such as fertilizer quotas or efficient fertilizer pricing policies), with insight into the economic cost of satisfying the environmental constraints and the trade-offs over different time horizons.
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method; and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as they apply to this design problem.
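Sequential unconstrained minimization converts the constrained design problem into a series of unconstrained ones with a growing penalty weight. Below is a minimal exterior-penalty sketch with a toy objective and constraint; the report itself uses a linear extended interior penalty and the actual cover-plate model, so this is illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def sumt(f, gs, x0, r0=1.0, growth=10.0, iters=6):
    """Sequential unconstrained minimization: fold constraints g(x) <= 0
    into the objective with a quadratic exterior penalty, re-solving as
    the penalty weight r increases."""
    x = np.asarray(x0, dtype=float)
    r = r0
    for _ in range(iters):
        def penalized(z):
            return f(z) + r * sum(max(0.0, g(z)) ** 2 for g in gs)
        x = minimize(penalized, x, method="Nelder-Mead").x
        r *= growth
    return x

# Toy stand-in: a "weight" objective and a "stress" feasibility limit
# (not the actual cover-plate formulation).
f = lambda z: z[0] ** 2 + z[1] ** 2
g = lambda z: 1.0 - z[0] - z[1]  # requires z0 + z1 >= 1
print(np.round(sumt(f, [g], x0=[2.0, 2.0]), 3))  # ~ [0.5, 0.5]
```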
NASA Astrophysics Data System (ADS)
Smits, K. M.; Drumheller, Z. W.; Lee, J. H.; Illangasekare, T. H.; Regnery, J.; Kitanidis, P. K.
2015-12-01
Aquifers around the world show troubling signs of irreversible depletion and seawater intrusion as climate change, population growth, and urbanization lead to reduced natural recharge rates and overuse. Scientists and engineers have begun to revisit the technology of managed aquifer recharge and recovery (MAR) as a means to increase the reliability of the diminishing and increasingly variable groundwater supply. Unfortunately, MAR systems remain wrought with operational challenges related to the quality and quantity of recharged and recovered water stemming from a lack of data-driven, real-time control. This research seeks to develop and validate a general simulation-based control optimization algorithm that relies on real-time data collected through embedded sensors and that can be used to ease the operational challenges of MAR facilities. Experiments to validate the control algorithm were conducted at the laboratory scale in a two-dimensional synthetic aquifer under both homogeneous and heterogeneous packing configurations. The synthetic aquifer used well-characterized technical sands and the electrical conductivity signal of an inorganic conservative tracer as a surrogate measure for water quality. The synthetic aquifer was outfitted with an array of sensors and an autonomous pumping system. Experimental results verified the feasibility of the approach and suggested that the system can improve the operation of MAR facilities. The dynamic parameter inversion reduced the average error between the simulated and observed pressures by between 12.5% and 71.4%. The control optimization algorithm ran smoothly and generated optimal control decisions. Overall, results suggest that, with some improvements to the inversion and interpolation algorithms, which can be further refined through laboratory experiments with sensors, the concept can successfully improve the operation of MAR facilities.
Turbine Performance Optimization Task Status
NASA Technical Reports Server (NTRS)
Griffin, Lisa W.; Turner, James E. (Technical Monitor)
2001-01-01
The capability to optimize turbine performance and accurately predict unsteady loads will allow for increased reliability, Isp, and thrust-to-weight ratio. The development of a fast, accurate aerodynamic design, analysis, and optimization system is required.
NASA Astrophysics Data System (ADS)
Drumheller, Z. W.; Regnery, J.; Lee, J. H.; Illangasekare, T. H.; Kitanidis, P. K.; Smits, K. M.
2014-12-01
Aquifers around the world show troubling signs of irreversible depletion and seawater intrusion as climate change, population growth, and urbanization lead to reduced natural recharge rates and overuse. Scientists and engineers have begun to re-investigate the technology of managed aquifer recharge and recovery (MAR) as a means to increase the reliability of the diminishing and increasingly variable groundwater supply. MAR systems offer the possibility of naturally increasing groundwater storage while improving the quality of impaired water used for recharge. Unfortunately, MAR systems remain wrought with operational challenges related to the quality and quantity of recharged and recovered water stemming from a lack of data-driven, real-time control. Our project seeks to ease the operational challenges of MAR facilities through the implementation of active sensor networks, adaptively calibrated flow and transport models, and simulation-based meta-heuristic control optimization methods. The developed system works by continually collecting hydraulic and water quality data from a sensor network embedded within the aquifer. The data are fed into an inversion algorithm, which calibrates the parameters and initial conditions of a predictive flow and transport model. The calibrated model is passed to a meta-heuristic control optimization algorithm (e.g. a genetic algorithm) to execute the simulations and determine the best course of action, i.e., the optimal pumping policy for current aquifer conditions. The optimal pumping policy is manually or autonomously applied. During operation, sensor data are used to assess the accuracy of the optimal prediction and augment the pumping strategy as needed. At laboratory scale, a small (18"H x 46"L) and an intermediate (6'H x 16'L) two-dimensional synthetic aquifer were constructed and outfitted with sensor networks. Data collection and model inversion components were developed, and sensor data were validated by analytical measurements.
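The operating cycle described above (sense, calibrate by inversion, optimize, actuate) can be summarized in a short control-loop sketch; every component below is a named placeholder for the project's actual modules, not working code against a real facility.

```python
def mar_control_loop(sensors, model, optimizer, apply_policy, cycles=24):
    """One day of hourly closed-loop MAR operation (hypothetical API)."""
    for _ in range(cycles):
        data = sensors.read()                  # embedded sensor network
        model.calibrate(data)                  # inversion: parameters + ICs
        policy = optimizer.best_policy(model)  # e.g. genetic algorithm
        apply_policy(policy)                   # set injection/recovery rates
        if not model.consistent_with(data):    # prediction check vs sensors
            apply_policy(optimizer.best_policy(model))  # augment strategy
```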
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali
2015-01-01
This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…
Research on the optimal structure configuration of dither RLG used in skewed redundant INS
NASA Astrophysics Data System (ADS)
Gao, Chunfeng; Wang, Qi; Wei, Guo; Long, Xingwu
2016-05-01
The actual combat effectiveness of weapon equipment is restricted by the performance of the Inertial Navigation System (INS), especially in situations requiring high reliability such as fighters, satellites and submarines. Through the use of skewed sensor geometries, redundancy techniques have been applied to reduce the cost and improve the reliability of the INS. In this paper, the structure configuration and the inertial sensor characteristics of a Skewed Redundant Strapdown Inertial Navigation System (SRSINS) using dithered Ring Laser Gyroscopes (RLG) are analyzed. Because of dither coupling effects, the system measurement errors can be amplified either when the individual gyro dither frequencies are near one another or when the structure of the SRSINS is unreasonable. Based on the characteristics of the RLG, research on the coupled vibration of dithered RLGs in SRSINS is carried out. On the principles of optimal navigation performance, optimal reliability and optimal cost-effectiveness, a comprehensive evaluation scheme for the inertial sensor configuration of SRSINS is given.
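Structure-configuration studies of skewed redundant sensor arrays commonly score a candidate geometry by the error amplification trace((H^T H)^-1) of its measurement matrix H. The sketch below evaluates gyro axes evenly spaced on a cone and recovers the classic optimal half-angle (tan^2(alpha) = 2); this is a generic textbook criterion, not necessarily the paper's exact evaluation scheme.

```python
import numpy as np

def cone_config(n, alpha):
    """Measurement matrix H: n unit sensor axes evenly spaced on a cone
    of half-angle alpha about the vertical axis."""
    th = 2.0 * np.pi * np.arange(n) / n
    return np.column_stack([np.sin(alpha) * np.cos(th),
                            np.sin(alpha) * np.sin(th),
                            np.cos(alpha) * np.ones(n)])

def error_amplification(H):
    """trace((H^T H)^-1): variance amplification of the least-squares
    rate estimate; smaller is better."""
    return np.trace(np.linalg.inv(H.T @ H))

alphas = np.radians(np.arange(30.0, 80.0, 0.1))
best = min(alphas, key=lambda a: error_amplification(cone_config(4, a)))
print("optimal cone half-angle ~", round(float(np.degrees(best)), 1), "deg")
# -> about 54.7 deg, i.e. tan^2(alpha) = 2
```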
Optimal maintenance of a multi-unit system under dependencies
NASA Astrophysics Data System (ADS)
Sung, Ho-Joon
The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, the study of maintenance, known as the optimal maintenance problem, has recently gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from studies concerned with identifying maintenance policies that provide the required system availability at minimum possible cost, to work on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they rely on specialized treatments to ease the mathematical derivation of the unknown functions in both the objective function and the constraints of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time) often do not possess closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of component lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependencies is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation proposes an alternative methodology for solving optimal maintenance problems that aims to achieve the same end goals as Reliability Centered Maintenance (RCM). RCM was first introduced to the aircraft industry in an attempt to bridge the gap between the empirically driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that prioritize functions based on criticality and influence are combined with mathematical modeling to obtain optimal maintenance policies. Where this thesis deviates from RCM is its proposal to directly apply quantitative processes to model the reliability measures in the optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures for each combination of decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Representing the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of numerical optimization techniques to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability.
The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to Time Limited Dispatch (TLD) of a Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model based on quasi-renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results are obtained, the RSEs are generated based on goodness-of-fit measures to yield as parsimonious a model as possible for constructing the optimization problem. Under the assumption of a constant failure rate for the lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common industry approach that leverages a Continuous-Time Markov Chain (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings (e.g., inspection cost, system corrective maintenance cost, etc.), resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more generalized assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distributions, it was shown to successfully capture component wear-out, as well as the economic dependencies among the system components.
Optimal hydraulic design of new-type shaft tubular pumping system
NASA Astrophysics Data System (ADS)
Zhu, H. G.; Zhang, R. T.; Zhou, J. R.
2012-11-01
Based on the characteristics of large flow rate, low head, short annual operation time, and high reliability required of city flood-control pumping stations, a new-type shaft tubular pumping system featuring a shaft suction box and a siphon-type discharge passage with a vacuum breaker as the cutoff device was put forward, which possesses such advantages as simpler structure, reliable cutoff, and higher energy performance. According to the design parameters of a city flood-control pumping station, a numerical computation model was set up including the shaft-type suction box, siphon-type discharge passage, pump impeller, and guide vanes. Using the commercial CFD software Fluent, the RNG k-ε turbulence model was adopted to close the three-dimensional time-averaged incompressible N-S equations. After completing the optimal hydraulic design of the shaft-type suction box, and keeping the parameters of total length, maximum width, and outlet section unchanged, siphon-type discharge passages with three hump locations and three hump heights were designed, and numerical analyses of the nine resulting hydraulic design schemes of the pumping system were carried out. The computational results show that changes in hump location and hump height directly affect the internal flow patterns of the discharge passages and the hydraulic performance of the system; when the hump is located 3.66D from the inlet section and the hump height is about 0.65D (D is the diameter of the pump impeller), the new-type shaft tubular pumping system achieves better energy performance. A model test of the optimally designed pumping system was carried out. The results show that the highest pumping system efficiency reaches 75.96%, and at the design head of 1.15 m the flow rate and system efficiency were 0.304 m³/s and 63.10%, respectively. Thus, the validity of the optimal design method was verified by the model test, and a solid foundation was laid for the application and extension of the new-type shaft tubular pumping system.
Lim, Chun Shen; Krishnan, Gopala; Sam, Choon Kook; Ng, Ching Ching
2013-01-16
Because the blocking agent occupies most of the binding surface of a solid phase, its ability to prevent nonspecific binding determines the signal-to-noise ratio (SNR) and reliability of an enzyme-linked immunosorbent assay (ELISA). We demonstrate a stepwise approach to seeking a compatible blocking buffer for indirect ELISA, via a case-control study (n=176) of Epstein-Barr virus (EBV)-associated nasopharyngeal carcinoma (NPC). Regardless of case-control status, we found that synthetic polymer blocking agents, mainly Ficoll and poly(vinyl alcohol) (PVA), were able to provide homogeneous backgrounds among samples, as opposed to commonly used blocking agents, notably nonfat dry milk (NFDM). The SNRs for NPC samples blocked with PVA were, on average, approximately 3-fold higher than those blocked with NFDM. Both intra- and inter-assay precisions of PVA-based assays were <14%. A blocking agent of choice should yield tolerable sample backgrounds from both cases and controls to ensure the reliability of an immunoassay.
Use telecommunications for real-time process control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zilberman, I.; Bigman, J.; Sela, I.
1996-05-01
Process operators desire real-time, accurate information to monitor and control product streams and to optimize unit operations. The challenge is how to cost-effectively install sophisticated analytical equipment in harsh environments such as process areas and maintain system reliability. Incorporating telecommunications technology with near infrared (NIR) spectroscopy may be the bridge to help operations achieve their online control goals. Coupling communications fiber optics with NIR analyzers enables the probe and sampling system to remain in the field while the crucial analytical equipment is remotely located in a general-purpose area without specialized protection provisions. The case histories show how two refineries used NIR spectroscopy online to track octane levels for reformate streams.
Control of the transition between regular and Mach reflection of shock waves
NASA Astrophysics Data System (ADS)
Alekseev, A. K.
2012-06-01
A control problem was considered that makes it possible to switch the flow between stationary Mach and regular reflection of shock waves within the dual solution domain. The sensitivity of the flow was computed by solving adjoint equations. A control disturbance was sought by applying gradient optimization methods. According to the computational results, the transition from regular to Mach reflection can be executed by raising the temperature. The transition from Mach to regular reflection can be achieved by lowering the temperature at moderate Mach numbers and is impossible at large Mach numbers. The reliability of the numerical results was confirmed by a posteriori analysis.
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series, and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability analysis in finite element modeling practice.
Time frequency analysis of olfactory induced EEG-power change.
Schriever, Valentin Alexander; Han, Pengfei; Weise, Stefanie; Hösel, Franziska; Pellegrino, Robert; Hummel, Thomas
2017-01-01
The objective of the present study was to investigate the usefulness of time-frequency analysis (TFA) of olfactory-induced EEG change with a low-cost, portable olfactometer in the clinical investigation of smell function. A total of 78 volunteers participated. The study was composed of three parts where olfactory stimuli were presented using a custom-built olfactometer. Part I was designed to optimize the stimulus as well as the recording conditions. In Part II EEG-power changes after olfactory/trigeminal stimulation were compared between healthy participants and patients with olfactory impairment. In Part III the test-retest reliability of the method was evaluated in healthy subjects. Part I indicated that the most effective paradigm for stimulus presentation was a cued stimulus, with an interstimulus interval of 18-20 s at a stimulus duration of 1000 ms, with each stimulus quality presented 60 times in blocks of 20 stimuli each. In Part II we found that central processing of olfactory stimuli analyzed by TFA differed significantly between healthy controls and patients even when controlling for age. It was possible to reliably distinguish patients with olfactory impairment from healthy individuals at a high degree of accuracy (healthy controls vs anosmic patients: sensitivity 75%; specificity 89%). In addition, Part III showed good test-retest reliability of TFA of chemosensory-induced EEG-power changes. Central processing of olfactory stimuli analyzed by TFA reliably distinguishes patients with olfactory impairment from healthy individuals at a high degree of accuracy. Importantly, this can be achieved with a simple olfactometer.
Sliding Mode Thermal Control System for Space Station Furnace Facility
NASA Technical Reports Server (NTRS)
Jackson, Mark E.; Shtessel, Yuri B.
1998-01-01
The decoupled control of the nonlinear, multi-input multi-output, and highly coupled Space Station Furnace Facility (SSFF) thermal control system is addressed. Sliding mode control theory, a subset of variable-structure control theory, is employed to increase the performance, robustness, and reliability of the SSFF's currently designed control system. This paper presents the nonlinear thermal control system description and develops the sliding mode controllers that cause the interconnected subsystems to operate in their local sliding modes, resulting in control system invariance to plant uncertainties and external and interaction disturbances. The desired decoupled flow-rate tracking is achieved by optimization of the local linear sliding mode equations. The controllers are implemented digitally, and extensive simulation results are presented to show the flow-rate tracking robustness and invariance to plant uncertainties, nonlinearities, external disturbances, and variations of the system pressure supplied to the controlled subsystems.
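The sliding-mode idea applied here can be illustrated on a single loop. The sketch below uses an invented first-order flow-rate plant and illustrative gains (not the SSFF equations): the control drives the state onto the sliding surface s = x − x_ref with a smoothed switching term, showing the characteristic insensitivity to a bounded matched disturbance:

```python
# Minimal sliding-mode control sketch for a scalar flow-rate loop.
# The first-order plant and all gains are illustrative stand-ins.
import numpy as np

a, b = -0.5, 2.0            # hypothetical plant: x' = a*x + b*u + d(t)
k, phi = 3.0, 0.05          # switching gain and boundary-layer width
dt, T = 1e-3, 5.0

x = 0.0
for i in range(int(T / dt)):
    t = i * dt
    x_ref = 1.0                          # desired flow rate (step command)
    d = 0.4 * np.sin(3 * t)              # bounded matched disturbance
    s = x - x_ref                        # sliding surface
    # Equivalent control plus smoothed switching term (tanh limits chattering)
    u = (-a * x - k * np.tanh(s / phi)) / b
    x += (a * x + b * u + d) * dt        # Euler integration of the plant

print("final tracking error:", abs(x - 1.0))
```

With the switching gain k chosen larger than the disturbance bound, the error converges to a small boundary layer around zero regardless of the disturbance details, which is the invariance property the paper exploits.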
Partially Decentralized Control Architectures for Satellite Formations
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Bauer, Frank H.
2002-01-01
In a partially decentralized control architecture, more than one but fewer than all nodes have supervisory capability. This paper describes an approach to choosing the number of supervisors in such an architecture, based on a reliability vs. cost trade. It also considers the implications of these results for the design of navigation systems for satellite formations that could be controlled with a partially decentralized architecture. Using an assumed cost model, analytic and simulation-based results indicate that it may be cheaper to achieve a given overall system reliability with a partially decentralized architecture containing only a few supervisors than with either fully decentralized or purely centralized architectures. Nominally, the subset of supervisors may act as centralized estimation and control nodes for corresponding subsets of the remaining subordinate nodes, and act as decentralized estimation and control peers with respect to each other. However, in the context of partially decentralized satellite formation control, the absolute positions and velocities of each spacecraft are unique, so that correlations which make estimates using only local information suboptimal occur only through common biases and process noise. Covariance and Monte Carlo analyses of a simplified system show that this lack of correlation may allow simplification of the local estimators while preserving the global optimality of the maneuvers commanded by the supervisors.
Trajectory optimization for the National Aerospace Plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1993-01-01
The objective of this second-phase research is to investigate the optimal ascent trajectory for the National Aerospace Plane (NASP) from runway take-off to orbital insertion and to address the unique problems associated with hypersonic flight trajectory optimization. The trajectory optimization problem for an aerospace plane is highly challenging because of the complexity involved. Previous work has been successful in obtaining sub-optimal trajectories by using energy-state approximation and time-scale decomposition techniques, but it is known that the energy-state approximation is not valid in certain portions of the trajectory. This research aims at employing the full dynamics of the aerospace plane and emphasizing direct trajectory optimization methods. The major accomplishments of this research include the first-time development of an inverse dynamics approach to trajectory optimization, which enables optimal trajectories for the aerospace plane to be generated efficiently and reliably, and general analytical solutions to constrained hypersonic trajectories that have wide application in trajectory optimization as well as in guidance and flight dynamics. Optimal trajectories in abort landing and ascent augmented with rocket propulsion and thrust vectoring control were also investigated. Motivated by this study, a new global trajectory optimization tool using continuous simulated annealing and a nonlinear predictive feedback guidance law have been under investigation, and some promising results have been obtained, which may well lead to more significant development and application in the near future.
Research and application of key technology of electric submersible plunger pump
NASA Astrophysics Data System (ADS)
Qian, K.; Sun, Y. N.; Zheng, S.; Du, W. S.; Li, J. N.; Pei, G. Z.; Gao, Y.; Wu, N.
2018-06-01
The electric submersible plunger pump is a new generation of rodless oil production equipment, whose improvements and upgrades of key technologies are conducive to its large-scale application, reducing cost and improving efficiency. In this paper, the operating mechanism of the unit is studied in depth and, in view of the problems existing in oilfield production, an optimization method is proposed, including the optimal design of a submersible linear motor, the development of a new double-acting load-relief pump, embedded flexible closed-loop control technology, and the development of low-cost power cables. In field application across 90 oil wells, the average pump inspection cycle was 608 days, the longest pump inspection cycle exceeded 1037 days, and the average power saving rate was 45.6%. The application results show that the new optimized and upgraded technology can further improve the reliability and adaptability of the electric submersible plunger pump and reduce the investment cost.
Optimization of controlled processes in combined-cycle plant (new developments and researches)
NASA Astrophysics Data System (ADS)
Tverskoy, Yu S.; Muravev, I. K.
2017-11-01
All modern complex technical systems, including the power units of thermal and nuclear power plants, operate within the system-forming structure of a multifunctional automated process control system (APCS). Advances in the mathematical support of modern APCS make it possible to extend automation to the solution of complex optimization problems of equipment heat- and mass-exchange processes in real time. The difficulty of efficiently managing a binary power unit is related to the need to solve at least three problems jointly. The first problem concerns the physical issues of combined-cycle technologies. The second is determined by the sensitivity of CCGT operation to changes in regime and climatic factors. The third is related to a precise description of the vector of controlled coordinates of a complex technological object. To obtain a joint solution of this complex of interconnected problems, the methodology of generalized thermodynamic analysis and the methods of automatic control theory and mathematical modeling are used. In the present report, results of new developments and studies are shown. These results improve the principles of process control and the structural synthesis of automatic control systems for power units with combined-cycle plants, providing attainable technical and economic efficiency and operational reliability of equipment.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints, as in deterministic optimization. The assessment of the multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
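A minimal sketch of the posterior-approximation step is given below. Plain Monte Carlo stands in for GSS, the limit-state function and reliability target are invented, and the supporting points come from a scrambled Sobol' sequence; the fitted log-probability surrogate turns the probabilistic constraint into an ordinary one for an SQP solver:

```python
# Sketch: estimate failure probabilities at Sobol'-sampled supporting points,
# fit a smooth surrogate to log(Pf), then solve the now-deterministic problem
# with SQP. Plain MC replaces the paper's GSS; everything here is illustrative.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def pf_mc(d, n=100_000):
    """Monte Carlo P[g < 0] for an invented limit state
    g = d1 + d2 - (X1 + X2), with X1, X2 ~ N(0, 0.3^2)."""
    x = rng.normal(0.0, 0.3, size=(n, 2))
    g = d[0] + d[1] - x.sum(axis=1)
    return max((g < 0).mean(), 1e-9)          # floor keeps log finite

# Supporting points from a scrambled Sobol' sequence over [0.5, 2]^2
sob = qmc.Sobol(d=2, scramble=True, seed=2)
pts = qmc.scale(sob.random_base2(m=5), [0.5, 0.5], [2.0, 2.0])   # 32 points
logpf = np.log([pf_mc(p) for p in pts])

# Quadratic surrogate of log(Pf): the "ordinary" constraint function
def basis(d):
    d1, d2 = d[..., 0], d[..., 1]
    return np.stack([np.ones_like(d1), d1, d2, d1 * d2, d1**2, d2**2], axis=-1)

beta, *_ = np.linalg.lstsq(basis(pts), logpf, rcond=None)
logpf_hat = lambda d: basis(np.asarray(d, float)) @ beta

# Deterministic problem: minimize cost subject to Pf <= 1e-3, solved by SQP
res = minimize(lambda d: d[0] + 2 * d[1], x0=[1.5, 1.5], method="SLSQP",
               bounds=[(0.5, 2.0)] * 2,
               constraints={"type": "ineq",
                            "fun": lambda d: np.log(1e-3) - logpf_hat(d)})
print("design:", res.x.round(3), "surrogate Pf:", float(np.exp(logpf_hat(res.x))))
```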
Interphase Thermomechanical Reliability and Optimization for High-Performance Ti Metal Laminates
2011-12-19
[Report-form residue; recoverable content:] Grant FA9550-08-1-0015; Program Manager: Dr. Joycelyn Harrison; Report …OSR-VA-TR-2012-0202. Abstract: Hybrid laminated composites such as titanium-graphite (TiGr) laminates are an emerging class of structural materials with the potential to enable a new generation of efficient, high-performance
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2003-01-01
This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
NASA Astrophysics Data System (ADS)
LI, Y.; Yang, S. H.
2017-05-01
The Antarctic astronomical telescopes operate year-round at the unattended South Pole, with only one maintenance opportunity each year. Due to the complexity of the optical, mechanical, and electrical systems, the telescopes are difficult to maintain and require multi-skilled expedition teams, which makes heightened attention to the reliability of the Antarctic telescopes essential. Based on the fault mechanism and fault modes of the main-axis control system of the equatorial Antarctic astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), the method of fault tree analysis is introduced in this article, and the importance degree of the top event is obtained from the structural importance of the bottom events. From these results, hidden problems and weak links can be effectively found, indicating directions for promoting the stability of the system and optimizing its design.
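The fault-tree computation can be illustrated with a toy two-level tree. The subsystem structure, basic events, and probabilities below are invented stand-ins for the AST3-3 main-axis system; the Birnbaum importance dP(top)/dp_i is what ranks the weak links:

```python
# Toy fault-tree sketch: top event = OR of two subsystems, each an AND of
# independent basic events. Probabilities are invented; Birnbaum importance
# identifies which basic event most influences the top event.
p = {"motor": 0.02, "driver": 0.01, "encoder": 0.03, "software": 0.005}

def top(prob):
    # Subsystem A fails if motor AND driver fail; B if encoder AND software.
    a = prob["motor"] * prob["driver"]
    b = prob["encoder"] * prob["software"]
    return a + b - a * b                      # OR of independent events

def birnbaum(name, eps=1e-6):
    hi = dict(p); hi[name] += eps
    lo = dict(p); lo[name] -= eps
    return (top(hi) - top(lo)) / (2 * eps)    # numerical dP(top)/dp_i

print("P(top) =", top(p))
for name in p:
    print(f"Birnbaum importance of {name}: {birnbaum(name):.4g}")
```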
Space Station man-machine automation trade-off analysis
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.; Bard, J.; Feinberg, A.
1985-01-01
The man-machine automation tradeoff methodology presented here is one of four research tasks comprising the autonomous spacecraft system technology (ASST) project. ASST was established to identify and study system-level design problems for autonomous spacecraft. Using the Space Station as an example spacecraft system requiring a certain level of autonomous control, a system-level man-machine automation tradeoff methodology is presented that: (1) optimizes man-machine mixes for different ground and on-orbit crew functions subject to cost, safety, weight, power, and reliability constraints, and (2) plots the best incorporation plan for new, emerging technologies by weighing cost, relative availability, reliability, safety, importance to out-year missions, and ease of retrofit. Although the methodology takes a fairly straightforward approach to valuing human productivity, it is still sensitive to the important subtleties associated with designing a well-integrated man-machine system. These subtleties include considerations such as crew preference to retain certain spacecraft control functions, or valuing human integration/decision capabilities over equivalent hardware/software where appropriate.
NASA Astrophysics Data System (ADS)
Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.; Kim, Taeseong; Thomsen, Kenneth
2016-09-01
The aim is to provide a fast and reliable approach to estimating ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large number of computationally expensive simulations, which cannot be run efficiently at each cost function evaluation of an MDO process. This work describes a method that allows the calculation of the blade load envelopes to be integrated inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and for a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and for a deterministic reduced DLB. Ultimate loads extracted from the two DLBs for each of the two blade designs are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar to those computed using the full DLB. This observation is used to propose a scheme that is computationally cheap and can be integrated inside an MDO framework, providing a sufficiently reliable estimation of the blade ultimate loading. The latter aspect is of key importance when design variables implementing passive control methodologies are included in the formulation of the optimization problem. An MDO of a 10 MW wind turbine blade is presented as an applied case study to show the efficacy of the reduced DLB concept.
Reliability-based optimization of maintenance scheduling of mechanical components under fatigue
Beaurepaire, P.; Valdebenito, M.A.; Schuëller, G.I.; Jensen, H.A.
2012-01-01
This study presents the optimization of the maintenance scheduling of mechanical components under fatigue loading. The cracks of damaged structures may be detected during non-destructive inspection and subsequently repaired. Fatigue crack initiation and growth show inherent variability, as does the outcome of inspection activities. The problem is addressed in the framework of reliability-based optimization. The initiation and propagation of fatigue cracks are efficiently modeled using cohesive zone elements. The applicability of the method is demonstrated by a numerical example, which involves a plate with two holes subject to alternating stress. PMID:23564979
NASA Astrophysics Data System (ADS)
Ravishankar, Bharani
Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite-element-based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of the constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that using adaptive sampling, the number of designs required to find the optimum was reduced drastically while improving accuracy. The system reliability of the ITPS was estimated using a Monte Carlo Simulation (MCS) based method. A separable Monte Carlo method was employed that allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, and error in finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability-of-failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
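A minimal sketch of the separable Monte Carlo estimator with a bootstrap error bar is shown below; the normal capacity and load distributions are invented placeholders for the ITPS limit states:

```python
# Sketch of separable Monte Carlo: capacity and load are sampled
# independently and compared pairwise, reusing every sample N*M times.
# A bootstrap over the two sample sets gives an error bar on Pf.
import numpy as np

rng = np.random.default_rng(3)
N = M = 2000
cap = rng.normal(100.0, 8.0, N)      # structural capacity samples (invented)
load = rng.normal(70.0, 12.0, M)     # thermal/mechanical load samples (invented)

# Separable estimator: compare every capacity against every load
pf = (cap[:, None] < load[None, :]).mean()

# Bootstrap: resample both sets to quantify estimator uncertainty
boots = []
for _ in range(200):
    c = rng.choice(cap, N)
    l = rng.choice(load, M)
    boots.append((c[:, None] < l[None, :]).mean())
se = np.std(boots)
print(f"Pf ~ {pf:.4f} +/- {se:.4f} (bootstrap s.e.)")
```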
Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih
2013-03-20
Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either interferes with or fails to link neighboring nodes. The optimization of transmission power depends on the expected node degree and node distribution. In this study, an optimization approach to an energy-efficient and fully reachable wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is introduced that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real deployments. PMID:23519351
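The degree-based range adjustment can be made concrete. Assuming a uniform random deployment of density ρ, the expected node degree is ρπr², so the minimal range for a target degree k is r = √(k/(ρπ)); the field size, node count, and target degree below are illustrative, not values from the paper:

```python
# Sketch of the range-adjustment idea: invert E[degree] = rho*pi*r^2 for the
# minimal transmission range, then check the result on a random deployment.
import numpy as np

rng = np.random.default_rng(4)
n, side = 200, 100.0                 # 200 nodes on a 100 m x 100 m field
rho = n / side**2                    # node density
k_target = 6                         # desired expected node degree

r = np.sqrt(k_target / (rho * np.pi))
print(f"transmission range for E[deg]={k_target}: r = {r:.1f} m")

# Empirical check: average degree of a random deployment at that range
pts = rng.uniform(0, side, (n, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
deg = ((d < r) & (d > 0)).sum(axis=1)
print(f"empirical mean degree: {deg.mean():.2f}")
# (boundary effects make the empirical mean slightly lower than the target)
```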
An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI
Churchill, Nathan W.; Spring, Robyn; Afshin-Pour, Babak; Dong, Fan; Strother, Stephen C.
2015-01-01
BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets. PMID:26161667
NASA Astrophysics Data System (ADS)
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2012-09-01
The global rise in energy demand brings major obstacles to many energy organizations in providing adequate energy supply. Hence, many techniques to generate cost-effective, reliable, and environmentally friendly alternative energy sources are being explored. One such method is the integration of photovoltaic cells, wind turbine generators, and fuel-based generators, together with storage batteries. Such power systems are known as distributed generation (DG) power systems. However, the application of DG power systems raises certain issues such as cost effectiveness, environmental impact, and reliability. The modelling as well as the optimization of this DG power system was successfully performed in previous work using Particle Swarm Optimization (PSO). The central idea of that work was to minimize cost, minimize emissions, and maximize reliability (a multi-objective (MO) setting) with respect to the power balance and design requirements. In this work, we introduce a fuzzy model that takes into account the uncertain nature of certain variables in the DG system that depend on weather conditions (such as the insolation and wind speed profiles). The MO optimization in a fuzzy environment was performed by applying the Hopfield Recurrent Neural Network (HNN). Analysis of the optimized results was then carried out.
Wang, Wei; Hwang, Sun Kak; Kim, Kang Lib; Lee, Ju Han; Cho, Suk Man; Park, Cheolmin
2015-05-27
The core components of a floating-gate organic thin-film transistor nonvolatile memory (OTFT-NVM) include the semiconducting channel layer, tunneling layer, floating-gate layer, and blocking layer, besides three terminal electrodes. In this study, we demonstrated OTFT-NVMs with all four constituent layers made of polymers based on consecutive spin-coating. Ambipolar charges injected and trapped in a polymer electret charge-controlling layer upon gate program and erase fields successfully allowed for reliable bistable channel current levels at zero gate voltage. We have observed that the memory performance, in particular the reliability of a device, significantly depends upon the thickness of both blocking and tunneling layers, and with an optimized layer thickness and materials selection, our device exhibits a memory window of 15.4 V, an on/off current ratio of 2 × 10⁴, read and write endurance over 100 cycles, and time-dependent data retention of 10⁸ s, even when fabricated on a mechanically flexible plastic substrate.
Control of epitaxial defects for optimal AlGaN/GaN HEMT performance and reliability
NASA Astrophysics Data System (ADS)
Green, D. S.; Gibb, S. R.; Hosse, B.; Vetury, R.; Grider, D. E.; Smart, J. A.
2004-12-01
High-quality GaN epitaxy continues to be challenged by the lack of matched substrates. Threading dislocations that result from heteroepitaxy are responsible for leakage currents, trapping effects, and may adversely affect device reliability. We have studied the impact of AlN nucleation conditions on the density and character of threading dislocations on SiC substrates. Variation of the nucleation temperature, V/III ratio, and thickness are seen to have a dramatic effect on the balance between edge, screw and mixed character dislocation densities. Electrical and structural properties have been assessed by AFM and XRD on a material level and through DC and RF performance at the device level. The ratio between dislocation characteristics has been established primarily through comparison of symmetric and asymmetric XRD rocking curve widths. The effect of each dislocation type on leakage current, RF power and reliability at 2 GHz, the targeted band for cell phone infrastructure applications, is discussed.
A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs
Xu, Xin; Yuan, Minjiao; Liu, Xiao; Cai, Zhiping; Wang, Tian
2018-01-01
In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs to improve reliability for error-prone and unreliable wireless communication links where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay sensitive WSNs. The main contribution of the COOR scheme is making full use of the remaining energy in the network to increase the transmission power of most nodes, which provides higher communication reliability or further transmission distance. Two optimization strategies, referred to as COOR(R) and COOR(P), are proposed to improve network performance. In the case of increased transmission power, the COOR(R) strategy chooses a node that has higher communication reliability at the same distance in comparison to traditional opportunistic routing when selecting the next-hop candidate node. Since the reliability of data transmission is improved, the delay of the data reaching the sink is reduced by shortening the time of communication between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. As a result, network performance can be improved for the following reasons: (a) the delay is reduced, as fewer hops are needed for the packet to reach the sink when transmission distances are longer; (b) the reliability can be improved, since it is the product of the reliability of every hop of the routing path, and the hop count is reduced while the reliability of each hop is the same as in the traditional method. After analyzing the energy consumption of the network in detail, the value of optimized transmission power in different areas is given. On the basis of a large number of experimental and theoretical analyses, the results show that the COOR scheme increases communication reliability by 36.62-87.77%, decreases delay by 21.09-52.48%, and balances the energy consumption of 86.97% of the nodes in the WSN. PMID:29751589
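Reasons (a) and (b) can be checked numerically: end-to-end reliability is the product of per-hop reliabilities and delay grows with hop count, so both strategies improve the path. The hop counts, per-hop reliabilities, and per-hop delay below are invented for illustration:

```python
# Numeric illustration of the COOR trade-off: fewer hops at equal per-hop
# reliability (COOR(P)-like) or better per-hop reliability over the same
# hops (COOR(R)-like) both improve the end-to-end path.
def path(reliab_per_hop, hops, delay_per_hop=10e-3):
    return reliab_per_hop ** hops, hops * delay_per_hop

baseline = path(0.90, 8)       # traditional OR: 8 hops at 0.90 per hop
coor_r   = path(0.97, 8)       # COOR(R)-like: same hops, higher reliability
coor_p   = path(0.90, 5)       # COOR(P)-like: same per-hop reliability, fewer hops

for name, (rel, dly) in [("baseline", baseline), ("COOR(R)", coor_r),
                         ("COOR(P)", coor_p)]:
    print(f"{name}: end-to-end reliability {rel:.3f}, delay {dly * 1e3:.0f} ms")
```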
Expósito-Rodríguez, Marino; Borges, Andrés A; Borges-Pérez, Andrés; Pérez, José A
2008-01-01
Background: The elucidation of gene expression patterns leads to a better understanding of biological processes. Real-time quantitative RT-PCR has become the standard method for in-depth studies of gene expression. A biologically meaningful reporting of target mRNA quantities requires accurate and reliable normalization in order to identify real gene-specific variation. The purpose of normalization is to control several variables such as different amounts and quality of starting material, variable enzymatic efficiencies of retrotranscription from RNA to cDNA, or differences between tissues or cells in overall transcriptional activity. The validity of a housekeeping gene as an endogenous control relies on the stability of its expression level across the sample panel being analysed. In the present report we describe the first systematic evaluation of potential internal controls during the tomato development process to identify which are the most reliable for transcript quantification by real-time RT-PCR. Results: In this study, we assess the expression stability of 7 traditional and 4 novel housekeeping genes in a set of 27 samples representing different tissues and organs of tomato plants at different developmental stages. First, we designed, tested, and optimized amplification primers for real-time RT-PCR. Then, expression data from each candidate gene were evaluated with three complementary approaches based on different statistical procedures. Our analysis suggests that the SGN-U314153 (CAC), SGN-U321250 (TIP41), SGN-U346908 ("Expressed") and SGN-U316474 (SAND) genes provide superior transcript normalization in tomato development studies. We recommend different combinations of these exceptionally stable housekeeping genes for suitable normalization of different developmental series, including the complete tomato development process. Conclusion: This work constitutes the first effort to select optimal endogenous controls for quantitative real-time RT-PCR studies of gene expression during the tomato development process. From our study a tool-kit of control genes emerges that outperforms the traditional genes in terms of expression stability. PMID:19102748
NASA Astrophysics Data System (ADS)
Korotkova, T. I.; Popova, V. I.
2017-11-01
A generalized mathematical model of decision-making for planning and mode selection that provides the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into levels of main and distribution heating networks with intermediate control stages. Evaluation of the effectiveness, reliability, and safety of such a complex system is carried out simultaneously according to several indicators, in particular pressure, flow, and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. A coordinated solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The choice of the optimal operating mode of a complex heat supply system is made on the basis of an iterative coordination process, which converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments when necessary, guaranteeing optimal safety, reliability, and efficiency of the system as a whole during operation. The degree of accuracy of the solution, for example, the degree of deviation of the internal air temperature from the required value, can also be changed interactively. This allows adjustment activities to be carried out in the best way and the quality of heat supply to consumers to be improved. At the same time, an energy-saving task is solved to determine the minimum required values of heads at sources and pumping stations.
Optimal control for wind turbine system via state-space method
NASA Astrophysics Data System (ADS)
Shanoob, Mudhafar L.
Renewable energy is becoming a fascinating research interest for future energy production because it is green and does not pollute nature. Wind energy is an excellent example of an evolving renewable resource. Wind energy has been used throughout the history of humanity: in ancient times for grinding grain, sailing, and so on, and nowadays for generating electrical power. Much research has been done on using wind to generate electricity. As wind flow is not reliable, it is a challenge to obtain stable electricity from this varying wind. This problem leads to the use of different control methods and the optimization of these methods to obtain stable and reliable electrical energy. In this research, a wind turbine system consisting of the aerodynamic system, drive train, and generator is considered to study transient and steady-state stability. A Doubly-Fed Induction Generator (DFIG) is used in this thesis. The wind turbine system is connected to a power system network: an infinite bus bar connected through a short transmission line and transformer, with the generator attached to the grid on the stator side. The state-space method is used to model the wind turbine parts, and the system is modeled and controlled using MATLAB/Simulink software. First, the current-mode control method (PVdq) with a PI regulator is used as a reference to find how the system reacts to an unexpected disturbance on the grid side or turbine side. The controller is tested with three disturbance scenarios: a disturbance in the mechanical torque input, a step disturbance in the electrical torque reference, and fault ride-through. In the simulation results, the transient responses of the system to these disturbances take a long time to settle. For this reason, Linear Quadratic Regulator (LQR) optimal control is utilized to solve this problem. The LQR method is designed as a type-1 servo system that depends on full state feedback and the tracking error. The LQR improves the transient stability and time response of the wind turbine system in all three disturbance scenarios. The results of both methods are explained in detail in the simulation section.
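A minimal LQR sketch in the spirit of the thesis is shown below; the two-state A, B matrices and the Q, R weights are illustrative placeholders, not the DFIG wind-turbine model:

```python
# Minimal LQR sketch: solve the continuous-time algebraic Riccati equation
# for a toy two-state model and form the state-feedback gain u = -K x.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])        # hypothetical linearized dynamics
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])            # state penalty
R = np.array([[0.1]])               # control penalty

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # u = -K x minimizes the quadratic cost
print("LQR gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

In the thesis, this gain would be augmented with integral action (the type-1 servo structure) so that the tracking error, not just the state, is driven to zero.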
NASA Astrophysics Data System (ADS)
Long, Kim Chenming
Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
Seizure Control in a Computational Model Using a Reinforcement Learning Stimulation Paradigm.
Nagaraj, Vivek; Lamperski, Andrew; Netoff, Theoden I
2017-11-01
Neuromodulation technologies such as vagus nerve stimulation and deep brain stimulation have shown some efficacy in controlling seizures in medically intractable patients. However, the inherent patient-to-patient variability of seizure disorders leads to a wide range of therapeutic efficacy. A patient-specific approach to determining stimulation parameters may lead to increased therapeutic efficacy while minimizing stimulation energy and side effects. This paper presents a reinforcement learning algorithm that optimizes stimulation frequency for controlling seizures with minimum stimulation energy. We apply our method to a computational model called the Epileptor, which simulates inter-ictal and ictal local field potential data. In order to apply reinforcement learning to the Epileptor, we introduce a specialized reward function and state-space discretization. With the reward function and discretization fixed, we test the effectiveness of the temporal difference reinforcement learning algorithm (TD(0)). For periodic pulsatile stimulation, we derive a relation that describes, for any stimulation frequency, the minimal pulse amplitude required to suppress seizures. The TD(0) algorithm is able to identify parameters that control seizures quickly. Additionally, our results show that the TD(0) algorithm refines the stimulation frequency to minimize stimulation energy, thereby converging to optimal parameters reliably. An advantage of the TD(0) algorithm is that it is adaptive, so the parameters necessary to control the seizures can change over time. We show that the algorithm can converge on the optimal solution in simulation with both slow and fast inter-seizure intervals.
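The TD(0) loop can be sketched schematically. The toy environment below is an invented stand-in for the Epileptor (seizure risk is assumed lowest near 130 Hz), and the reward trades a seizure penalty against stimulation energy; only the tabular TD(0) update itself follows the standard algorithm:

```python
# Schematic TD(0) sketch: the controller walks over candidate stimulation
# frequencies and learns a state value from a reward penalizing seizures
# and stimulation energy. All constants and the seizure model are invented.
import numpy as np

rng = np.random.default_rng(5)
freqs = np.arange(0, 201, 20)          # candidate frequencies (Hz)
V = np.zeros(len(freqs))               # state value per frequency
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(i):
    """Toy environment: seizure risk lowest near 130 Hz; energy ~ frequency."""
    p_seizure = 0.05 + 0.4 * abs(freqs[i] - 130) / 130
    seizure = rng.random() < p_seizure
    return -(10.0 * seizure + 0.01 * freqs[i])

s = 0
for _ in range(20000):
    # epsilon-greedy move to a neighboring frequency (or stay)
    candidates = [c for c in (s - 1, s, s + 1) if 0 <= c < len(freqs)]
    if rng.random() < eps:
        s_next = int(rng.choice(candidates))
    else:
        s_next = max(candidates, key=lambda c: V[c])
    r = step(s_next)
    V[s] += alpha * (r + gamma * V[s_next] - V[s])   # TD(0) update
    s = s_next

print("preferred frequency:", freqs[int(np.argmax(V))], "Hz")
```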
Optimization-Based Management of Energy Systems
2011-05-11
[Briefing/figure residue; recoverable content only:] Plot of fuel consumption (gal/h) vs. power (kW). Slide title: "Energy Management Framework: Dealing with Uncertainties"; test cases used to exploit … Key points: loads served under all operating conditions; 'customizable' power quality and reliability; seamless transition between islanding and off-grid operation.
Gomaa Haroun, A H; Li, Yin-Ya
2017-11-01
In today's fast-developing world, load frequency control (LFC) plays a most significant role in providing a good-quality power supply in the power system. To deliver reliable power, an LFC system requires a highly competent and intelligent control technique. Hence, in this article, a novel hybrid fuzzy logic intelligent proportional-integral-derivative (FLiPID) controller is proposed for LFC of interconnected multi-area power systems. A four-area interconnected thermal power system incorporating physical constraints and boiler dynamics is considered, and the adjustable parameters of the FLiPID controller are optimized using a particle swarm optimization (PSO) scheme employing an integral square error (ISE) criterion. The proposed method has been established to enhance the power system performance as well as to reduce the oscillations due to uncertainties in the system parameters and load perturbations. The supremacy of the suggested method is demonstrated by comparing the simulation results with some recently reported heuristic methods, such as fuzzy logic proportional-integral (FLPI) and intelligent proportional-integral-derivative (PID) controllers, for the same electrical power system. The investigations showed that the FLiPID controller provides better dynamic performance and outperforms the other approaches in terms of settling time and minimum undershoots of the frequency as well as tie-line power flow deviations following a perturbation, in addition to achieving an appropriate integral absolute error (IAE). Finally, the sensitivity of the plant is inspected by varying the system parameters and operating load conditions from their nominal values. It is observed that the suggested optimization-based controller is robust and performs satisfactorily under variations in operating load condition, system parameters, and load pattern.
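The PSO-plus-ISE tuning loop can be sketched as follows; each particle is a candidate (Kp, Ki, Kd) triple scored by the ISE of a simulated step-load response. The one-pole plant and all constants are invented stand-ins for the four-area thermal system:

```python
# Sketch of PSO tuning of PID gains against an ISE criterion.
# Toy plant, gain box, and PSO coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(6)

def ise(gains, dt=0.01, T=10.0):
    """Integral square error of a PID-regulated toy plant under a unit
    step load disturbance (Euler integration)."""
    kp, ki, kd = gains
    x = i_err = prev_err = 0.0
    cost = 0.0
    for _ in range(int(T / dt)):
        err = 0.0 - x
        i_err += err * dt
        d_err = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * i_err + kd * d_err
        x += (-0.5 * x + u + 1.0) * dt       # toy one-pole plant + disturbance
        if abs(x) > 1e6:
            return 1e9                        # diverged: penalize heavily
        cost += err * err * dt
    return cost

# Bare-bones PSO over the gain box [0, 10]^3
n, iters = 20, 60
pos = rng.uniform(0, 10, (n, 3))
vel = np.zeros((n, 3))
pbest, pbest_f = pos.copy(), np.array([ise(p) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0, 10)
    f = np.array([ise(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()
print("tuned (Kp, Ki, Kd):", g.round(2), "ISE:", round(ise(g), 4))
```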
A neuro approach to solve fuzzy Riccati differential equations
NASA Astrophysics Data System (ADS)
Shahrir, Mohammad Shazri; Kumaresan, N.; Kamali, M. Z. M.; Ratnavelu, Kurunathan
2015-10-01
There are many applications of optimal control theory, especially in the area of control systems in engineering. In this paper, a fuzzy quadratic Riccati differential equation is estimated using neural networks (NN). Previous works have shown reliable results using the Runge-Kutta 4th order (RK4) method. The solution can be achieved by solving the 1st-order nonlinear ordinary differential equation (ODE) that is found commonly in Riccati differential equations. Research has shown improved results relative to the RK4 method. It can be said that the NN approach shows promising results, with the advantages of continuous estimation and improved accuracy over RK4.
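As a baseline of the kind such papers benchmark against, the sketch below applies classical RK4 to the crisp scalar quadratic Riccati test problem x′ = 1 + 2x − x², x(0) = 0, whose exact solution is known (the fuzzy spread of the coefficients is ignored here):

```python
# RK4 baseline for the quadratic Riccati equation x' = 1 + 2x - x^2, x(0)=0.
# Exact solution: x(t) = 1 + sqrt(2)*tanh(sqrt(2)*t + 0.5*ln((sqrt(2)-1)/(sqrt(2)+1))).
import math

def f(t, x):
    return 1.0 + 2.0 * x - x * x

def rk4(x0, t0, t1, h=0.01):
    x, t = x0, t0
    while t < t1 - 1e-12:
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

x1 = rk4(0.0, 0.0, 1.0)
c = 0.5 * math.log((math.sqrt(2) - 1) / (math.sqrt(2) + 1))
exact = 1 + math.sqrt(2) * math.tanh(math.sqrt(2) * 1.0 + c)
print(f"RK4: {x1:.6f}, exact: {exact:.6f}")
```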
Modeling, simulation and control for a cryogenic fluid management facility, preliminary report
NASA Technical Reports Server (NTRS)
Turner, Max A.; Vanbuskirk, P. D.
1986-01-01
The synthesis of a control system for a cryogenic fluid management facility was studied. The severe demand for reliability, as well as instrumentation and control unique to the Space Station environment, are prime considerations. Recognizing that an effective control system depends heavily on a quantitative description of the facility dynamics, a methodology for process identification and parameter estimation is postulated. A block diagram of the associated control system is also produced. Finally, an on-line adaptive control strategy is developed utilizing optimization of the velocity-form control parameters (proportional gains, integral and derivative time constants) in appropriate difference equations for direct digital control. Of special concern are the communications, software and hardware supporting interaction between the ground and orbital systems. It is envisioned that specialists in OSI/ISO standards, using the Ada programming language, will influence further development, testing and validation of the simplified models presented here for adaptation to the actual flight environment.
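The velocity (incremental) form of the PID difference equation mentioned above can be sketched as follows; the gains and sample time are placeholders, not values from the report. Each call returns the control increment, which the actuator integrates, making parameter changes bumpless.

def velocity_pid(kp, ti, td, dt):
    """Return a controller that yields the control *increment* du at each
    sample; the actuator accumulates the increments."""
    e1 = e2 = 0.0  # e[k-1], e[k-2]
    def step(e):
        nonlocal e1, e2
        du = kp * ((e - e1) + (dt / ti) * e + (td / dt) * (e - 2 * e1 + e2))
        e2, e1 = e1, e
        return du
    return step

ctrl = velocity_pid(kp=2.0, ti=8.0, td=1.0, dt=0.5)
u = 0.0
for e in [1.0, 0.8, 0.5, 0.3, 0.1]:   # illustrative error sequence
    u += ctrl(e)                      # actuator position accumulates increments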
System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft
NASA Technical Reports Server (NTRS)
Pullen, Samuel P.; Parkinson, Bradford W.
1994-01-01
This paper discusses the application of evolutionary random-search algorithms (simulated annealing and genetic algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
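A minimal sketch of the approach's core loop: simulated annealing in which every candidate design is scored by a Monte Carlo estimate of expected cost under performance uncertainty. The one-variable cost model below is a stand-in assumption, not the GP-B spacecraft model.

import math, random

def mc_objective(design, n=200):
    """Average penalized cost over random performance outcomes (assumed model)."""
    total = 0.0
    for _ in range(n):
        margin = design - random.gauss(1.0, 0.3)          # capability minus random demand
        total += design + (100.0 if margin < 0 else 0.0)  # unit cost + failure penalty
    return total / n

x = 2.0
fx = mc_objective(x)
T = 10.0
for _ in range(500):
    x_new = x + random.gauss(0, 0.2)
    f_new = mc_objective(x_new)
    # accept improvements always, worse moves with Boltzmann probability
    if f_new < fx or random.random() < math.exp(-(f_new - fx) / T):
        x, fx = x_new, f_new
    T *= 0.99                                             # geometric cooling schedule
print(x, fx)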
Fateen, Seif-Eddeen K.; Bonilla-Petriciolet, Adrian
2014-01-01
The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA, a hybrid between monkey algorithm and krill herd algorithm, covariance matrix adaptation evolution strategy (CMAES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO). The results clearly showed that CS is the most reliable of all methods as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design. PMID:24967430
[Reliability of a positron emission tomography system (CTI:PT931/04-12)].
Watanuki, Shoichi; Ishii, Keizo; Itoh, Masatoshi; Orihara, Hikonojyo
2002-05-01
The maintenance data of a PET system (PT931/04-12, CTI Inc.) were analyzed to evaluate its reliability. We examined whether the initial performance in system resolution and efficiency was maintained. The reliability of the PET system was evaluated from the MTTF (mean time to failure) and MTBF (mean time between failures) of each part of the system, obtained from 13 years of maintenance data. The initial resolution was maintained, but the efficiency decreased to 72% of its initial value. Of the system failures, 83% involved the detector blocks (DB) and DB control modules (BC). The MTTF of DB and BC were 2,733 and 3,314 days, and the MTBF of DB and BC per detector ring were 38 and 114 days, respectively. The MTBF of the system was 23 days. We found a seasonal dependence in the number of DB and BC failures, which suggests the failures may be related to humidity. The reliability of the PET system strongly depends on the MTBF of DB and BC. Improving the quality of these parts and optimizing the operating environment may increase the reliability of the PET system. For the popularization of PET, it is effective to evaluate the reliability of the system and report it to the users.
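For reference, the MTTF/MTBF bookkeeping used here reduces to simple ratios over the observation window, as in this Python sketch; the failure and part counts are illustrative assumptions, not the PT931/04-12 data.

operating_days = 13 * 365
failures = {"detector_block": 45, "bc_module": 15}   # assumed failure counts
units = {"detector_block": 512, "bc_module": 64}     # assumed part counts

for part, n_fail in failures.items():
    mttf = operating_days * units[part] / n_fail     # per-part mean time to failure
    mtbf = operating_days / n_fail                   # subsystem mean time between failures
    print(f"{part}: MTTF = {mttf:.0f} part-days, MTBF = {mtbf:.1f} days")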
Tan, Jin; Zhang, Yingchen
2017-02-02
With increasing penetrations of wind generation on electric grids, wind power plants (WPPs) are encouraged to provide frequency ancillary services (FAS); however, it is a challenge to ensure that variable wind generation can reliably provide these ancillary services. This paper proposes using a battery energy storage system (BESS) to ensure the WPPs' commitment to FAS. This method also focuses on reducing the BESS's size and extending its lifetime. In this paper, a state-machine-based coordinated control strategy is developed to utilize a BESS to support the obliged FAS of a WPP (including both primary and secondary frequency control). This method takes into account the operational constraints of the WPP (e.g., real-time reserve) and the BESS (e.g., state of charge [SOC], charge and discharge rate) to provide reliable FAS. Meanwhile, an adaptive SOC-feedback control is designed to maintain SOC at the optimal value as much as possible and thus reduce the size and extend the lifetime of the BESS. In conclusion, the effectiveness of the control strategy is validated with an innovative, multi-area, interconnected power system simulation platform that can mimic realistic power systems operation and control by simulating real-time economic dispatch, regulating reserve scheduling, multi-area automatic generation control, and generators' dynamic response.
Bam, L; McLaren, Z M; Coetzee, E; von Leipzig, K H
2017-10-01
The under-performance of supply chains presents a significant hindrance to disease control in developing countries. Stock-outs of essential medicines lead to treatment interruptions, which can force changes in patient drug regimens, drive drug resistance and increase mortality. This study is one of few to quantitatively evaluate the effectiveness of supply chain policies in reducing shortages and costs. This study develops a systems dynamics simulation model of the downstream supply chain for amikacin, a second-line tuberculosis drug, using 10 years of South African data. We evaluate current supply chain performance in terms of reliability, responsiveness and agility, following the widely used Supply Chain Operations Reference framework. We simulate 141 scenarios that represent different combinations of supplier characteristics, inventory management strategies and demand forecasting methods to identify the Pareto optimal set of management policies that jointly minimize the number of shortages and total cost. Despite long supplier lead times and unpredictable demand, the amikacin supply chain is 98% reliable and agile enough to accommodate a 20% increase in demand without a shortage. However, this is accomplished by overstocking amikacin by 167%, which incurs high holding costs. The responsiveness of suppliers is low: only 57% of orders are delivered to the central provincial drug depot within one month. We identify three Pareto optimal safety stock management policies. A short supplier lead time can produce Pareto optimal outcomes even in the absence of other optimal policies. This study produces concrete, actionable guidelines to cost-effectively reduce stock-outs by implementing optimal supply chain policies. Preferentially selecting drug suppliers with short lead times accommodates unexpected changes in demand. Optimal supply chain management should be an essential component of national policy to reduce the mortality rate.
Computationally efficient stochastic optimization using multiple realizations
NASA Astrophysics Data System (ADS)
Bayer, P.; Bürger, C. M.; Finkel, M.
2008-02-01
The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of water supply well field design, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simply ordering a stack of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
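A minimal sketch of the basic stack-ordering idea, assuming a user-supplied simulation design_ok (a trivial stand-in here): realizations that caused failures are promoted to the front of the stack so the next candidate is screened against the critical ones first, and evaluation stops as soon as the reliability target can no longer be met. The reordering rule is simplified relative to the paper's variants.

def design_ok(design, realization):
    """Stand-in for an expensive groundwater model run."""
    return design >= realization

def evaluate(design, stack, target_reliability=0.9):
    """Return feasibility of a candidate, reordering the stack in place."""
    allowed = int(len(stack) * (1.0 - target_reliability))  # failures tolerated
    failures = []
    for realization in stack:
        if not design_ok(design, realization):
            failures.append(realization)
            if len(failures) > allowed:
                break                 # infeasible: skip the remaining model runs
    for r in reversed(failures):      # promote critical realizations to the front
        stack.remove(r)
        stack.insert(0, r)
    return len(failures) <= allowed

stack = [0.2, 0.9, 0.4, 0.95, 0.5] * 100   # 500 illustrative realizations
print(evaluate(0.92, stack))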
Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L
2012-10-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNP array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
Coping with occupational stress: the role of optimism and coping flexibility.
Reed, Daniel J
2016-01-01
The current study aimed at measuring whether coping flexibility is a reliable and valid construct in a UK sample and subsequently investigating the association between coping flexibility, optimism, and psychological health - measured by perceived stress and life satisfaction. A UK university undergraduate student sample (N=95) completed an online questionnaire. The study is among the first to examine the validity and reliability of the English version of a scale measuring coping flexibility in a Western population and is also the first to investigate the association between optimism and coping flexibility. The results revealed that the scale had good reliability overall; however, factor analysis revealed no support for the existing two-factor structure of the scale. Coping flexibility and optimism were found to be strongly correlated, and hierarchical regression analyses revealed that the interaction between them predicted a large proportion of the variance in both perceived stress and life satisfaction. In addition, structural equation modeling revealed that optimism completely mediated the relationship between coping flexibility and both perceived stress and life satisfaction. The findings add to the occupational stress literature to further our understanding of how optimism is important in psychological health. Furthermore, given that optimism is a personality trait, and consequently relatively stable, the study also provides preliminary support for the potential of targeting coping flexibility to improve psychological health in Western populations. These findings must be replicated, and further analyses of the English version of the Coping Flexibility Scale are needed.
NASA Astrophysics Data System (ADS)
Jung, Do Yang; Lee, Baek Haeng; Kim, Sun Wook
Electric vehicle (EV) performance is very dependent on the traction batteries. To develop electric vehicles with high performance and good reliability, the traction batteries have to be managed to obtain maximum performance under various operating conditions. Enhancement of battery performance can be accomplished by implementing a battery management system (BMS) that plays an important role in optimizing the control mechanism of charge and discharge of the batteries as well as monitoring the battery status. In this study, a BMS has been developed for maximizing the use of Ni-MH batteries in electric vehicles. This system performs several tasks: the control of charging and discharging, overcharge and over-discharge protection, the calculation and display of state-of-charge (SOC), safety, and thermal management. The BMS is installed and tested in a DEV5-5 electric vehicle developed by Daewoo Motor Co. and the Institute for Advanced Engineering in Korea. Eighteen modules of a Panasonic nickel-metal hydride (Ni-MH) battery, 12 V, 95 A h, are used in the DEV5-5. High accuracy within a range of 3% and good reliability are obtained. The BMS also improves the performance and cycle-life of the Ni-MH battery pack, as well as the reliability and the safety of the electric vehicle.
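For illustration, the SOC calculation at the heart of such a BMS is often done by coulomb counting; the sketch below reuses the 95 A h rating quoted above but assumes an arbitrary charging efficiency and sign convention (positive current = discharge).

def update_soc(soc, current_a, dt_s, capacity_ah=95.0, coulomb_eff=0.95):
    """One coulomb-counting step; returns SOC clamped to [0, 1]."""
    dq_ah = current_a * dt_s / 3600.0          # charge moved this interval
    eff = 1.0 if current_a > 0 else coulomb_eff  # apply losses on charge only
    soc -= eff * dq_ah / capacity_ah
    return min(1.0, max(0.0, soc))

soc = 0.8
for i_amps in [50.0, 50.0, -20.0]:             # successive 1-s current samples
    soc = update_soc(soc, i_amps, 1.0)
print(soc)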
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems (WDSs). By maximizing flow entropy, one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed using other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability. A comparative analysis between the simple flow entropy and the new method is then conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
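As a point of reference, the simple (diameter-insensitive) flow entropy reduces to a Shannon entropy over pipe-flow fractions, as in this sketch; the paper's diameter-sensitive extension adds diameter/velocity weighting that is omitted here.

import math

def flow_entropy(pipe_flows):
    """Shannon entropy of the pipe flow fractions (simple flow entropy)."""
    total = sum(pipe_flows)
    fractions = [q / total for q in pipe_flows if q > 0]
    return -sum(p * math.log(p) for p in fractions)

print(flow_entropy([10.0, 10.0, 10.0]))  # uniform flows -> maximum entropy
print(flow_entropy([27.0, 2.0, 1.0]))    # skewed flows -> lower entropy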
Identification of minimal parameters for optimal suppression of chaos in dissipative driven systems.
Martínez, Pedro J; Euzzor, Stefano; Gallas, Jason A C; Meucci, Riccardo; Chacón, Ricardo
2017-12-21
Taming chaos arising from dissipative non-autonomous nonlinear systems by applying additional harmonic excitations is a reliable and widely used procedure nowadays. But the suppressory effectiveness of generic non-harmonic periodic excitations continues to be a significant challenge, both to our theoretical understanding and in practical applications. Here we show how the effectiveness of generic suppressory excitations is optimally enhanced when the impulse transmitted by them (the time integral over two consecutive zeros) is judiciously controlled in a non-obvious way. Specifically, the effective amplitude of the suppressory excitation is minimal when the impulse transmitted is maximum. Also, by lowering the impulse transmitted one obtains larger regularization areas in the initial phase difference-amplitude control plane, the price to be paid being the requirement of larger amplitudes. These two remarkable features, which constitute our definition of optimum control, are demonstrated experimentally by means of an analog version of a paradigmatic model, and confirmed numerically by simulations of such a damped driven system including the presence of noise. Our theoretical analysis shows that the controlling effect of varying the impulse is due to a subsequent variation of the energy transmitted by the suppressory excitation.
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
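The predict/update cycle behind such an estimator can be illustrated with a scalar Kalman filter; the flight system uses an extended Kalman filter over five component deviation parameters and a nonlinear propulsion model, which this toy linear example does not attempt to reproduce.

def kf_step(x, P, z, q=1e-4, r=0.01):
    """One predict/update for a random-walk state observed directly.
    q = process noise variance, r = measurement noise variance."""
    P = P + q                       # predict: state assumed constant, add process noise
    K = P / (P + r)                 # Kalman gain
    x = x + K * (z - x)             # update with the measurement residual
    P = (1 - K) * P
    return x, P

x, P = 0.0, 1.0
for z in [0.9, 1.1, 0.95, 1.05]:    # noisy measurements of a parameter near 1.0
    x, P = kf_step(x, P, z)
print(x, P)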
NASA Astrophysics Data System (ADS)
Ghosh, Nabendu; Kumar, Pradip; Nandi, Goutam
2016-10-01
Welding input process parameters play a very significant role in determining the quality of a welded joint. Only by properly controlling every element of the process can product quality be controlled. For better quality in MIG welding of ferritic stainless steel AISI 409, precise control of process parameters, parametric optimization of the process parameters, prediction and control of the desired responses (quality indices), and continued, elaborate experiments, analysis and modeling are needed. A knowledge base may thus be generated which may be utilized by practicing engineers and technicians to produce good-quality welds more precisely, reliably and predictively. In the present work, X-ray radiographic testing has been conducted in order to detect surface and sub-surface defects of weld specimens made of ferritic stainless steel. The quality of the weld has been evaluated in terms of yield strength, ultimate tensile strength and percentage elongation of the welded specimens. The observed data have been interpreted, discussed and analyzed by considering ultimate tensile strength, yield strength and percentage elongation, combined with the use of the Grey-Taguchi methodology.
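A minimal sketch of the grey relational step of such a Grey-Taguchi analysis: normalize each response (all treated as larger-is-better here), form deviation sequences, compute grey relational coefficients, and average them into a grade per trial. The response values are illustrative, not the measured weld data.

import numpy as np

y = np.array([[420.0, 300.0, 28.0],     # rows: trials; cols: UTS, YS, %elongation
              [455.0, 320.0, 31.0],
              [440.0, 310.0, 35.0]])

norm = (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))  # larger-is-better scaling
delta = 1.0 - norm                                            # deviation sequences
zeta = 0.5                                                    # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)                                      # grey relational grade
print("best trial:", grade.argmax() + 1)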
Koo, Terry K; Cohen, Jeffrey H; Zheng, Yongping
2011-11-01
Soft tissue exhibits nonlinear stress-strain behavior under compression. Characterizing its nonlinear elasticity may aid detection, diagnosis, and treatment of soft tissue abnormality. The purposes of this study were to develop a rate-controlled Mechano-Acoustic Indentor System and a corresponding finite element optimization method to extract nonlinear elastic parameters of soft tissue and evaluate its test-retest reliability. An indentor system using a linear actuator to drive a force-sensitive probe with a tip-mounted ultrasound transducer was developed. Twenty independent sites at the upper lateral quadrant of the buttock from 11 asymptomatic subjects (7 men and 4 women from a chiropractic college) were indented at 6% per second for 3 sessions, each consisting of 5 trials. Tissue thickness, force at 25% deformation, and area under the load-deformation curve from 0% to 25% deformation were calculated. Optimized hyperelastic parameters of the soft tissue were calculated with a finite element model using a first-order Ogden material model. Load-deformation response on a standardized block was then simulated, and the corresponding area and force parameters were calculated. Between-trials repeatability and test-retest reliability of each parameter were evaluated using coefficients of variation and intraclass correlation coefficients, respectively. Load-deformation responses were highly reproducible under repeated measurements. Coefficients of variation of tissue thickness, area under the load-deformation curve from 0% to 25% deformation, and force at 25% deformation averaged 0.51%, 2.31%, and 2.23%, respectively. Intraclass correlation coefficients ranged between 0.959 and 0.999, indicating excellent test-retest reliability. The automated Mechano-Acoustic Indentor System and its corresponding optimization technique offers a viable technology to make in vivo measurement of the nonlinear elastic properties of soft tissue. This technology showed excellent between-trials repeatability and test-retest reliability with potential to quantify the effects of a wide variety of manual therapy techniques on the soft tissue elastic properties.
The T/R modules for phased-array antennas
NASA Astrophysics Data System (ADS)
Peignet, Colette; Mancuso, Yves; Resneau, J. Claude
1990-09-01
The concept of phased array radar is critically dependent on the availability of compact, reliable and low-power-consumption transmitter/receiver (T/R) modules. An overview is given of two major programs currently under development within the Thomson group and of three major development axes (electrical concept optimization, packaging, and size reduction). The technical feasibility of the concept was proven, and the three major axes were highlighted on the basis of reliability, power-added efficiency, and RF test optimization.
A hybrid Jaya algorithm for reliability-redundancy allocation problems
NASA Astrophysics Data System (ADS)
Ghavidel, Sahand; Azizivahed, Ali; Li, Li
2018-04-01
This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on the standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then tested on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence for its better and acceptable optimization performance compared to the original Jaya algorithm and other reported optimal results.
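For context, the core Jaya move that the LJaya-TVAC variant builds on is shown below on a simple sphere function; the time-varying acceleration coefficients and the TLBO-style learning phase of the paper are omitted from this sketch.

import numpy as np

def jaya_step(pop, fitness, rng, lo, hi):
    """One Jaya generation: move toward the best and away from the worst."""
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    new = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    return np.clip(new, lo, hi)

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, (30, 2))
sphere = lambda p: (p ** 2).sum(axis=1)      # stand-in test function
for _ in range(100):
    cand = jaya_step(pop, sphere(pop), rng, -5, 5)
    improve = sphere(cand) < sphere(pop)     # greedy selection, as in Jaya
    pop[improve] = cand[improve]
print(pop[np.argmin(sphere(pop))])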
Scheduling for energy and reliability management on multiprocessor real-time systems
NASA Astrophysics Data System (ADS)
Qi, Xuan
Scheduling algorithms for multiprocessor real-time systems have been studied for years, and many well-recognized algorithms have been proposed. However, it is still an evolving research area, and many problems remain open due to their intrinsic complexity. With the emergence of multicore processors, it is necessary to re-investigate these scheduling problems and design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As commonly deployed energy-saving techniques (e.g., dynamic voltage and frequency scaling (DVFS)) significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability and satisfy quality assurance requirements. Extensive simulations are conducted to evaluate the performance of the proposed algorithms in terms of scheduling overhead reduction, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes can preserve system reliability while still achieving substantial energy savings.
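The DVFS-reliability interaction mentioned above is commonly captured in this literature by an exponential fault-rate model, in which transient fault rates grow as frequency (and voltage) drop; the sketch below uses that standard form with illustrative constants, not the dissertation's exact parameters.

import math

def task_reliability(c, f, lam0=1e-8, d=3.0, f_min=0.4):
    """Probability that a task with c cycles (at f = 1) completes fault-free
    at normalized frequency f in [f_min, 1]. Fault rate rises exponentially
    as frequency is scaled down; execution time scales as c/f."""
    lam = lam0 * 10 ** (d * (1.0 - f) / (1.0 - f_min))
    return math.exp(-lam * c / f)

for f in (1.0, 0.7, 0.4):
    print(f, task_reliability(c=1e4, f=f))   # slower -> longer and more fault-prone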
Reliability Based Design for a Raked Wing Tip of an Airframe
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2011-01-01
A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended range airliner made of composite and metallic materials. Design is formulated for an accepted level of risk or reliability. The design variables, weight and the constraints became functions of reliability. Uncertainties in the load, strength and the material properties, as well as the design variables, were modeled as random parameters with specified distributions, like normal, Weibull or Gumbel functions. The objective function and constraint, or a failure mode, became derived functions of the risk-level. Solution to the problem produced the optimum design with weight, variables and constraints as a function of the risk-level. Optimum weight versus reliability traced out an inverted-S shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. Under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent, and decreased when the reliability was compromised. A design could be selected depending on the level of risk acceptable to a situation. The optimization process achieved up to a 20-percent reduction in weight over traditional design.
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-01-01
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, feature extraction algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system. PMID:29462855
Rah, Jeong-Eun; Shin, Dongho; Oh, Do Hoon; Kim, Tae Hyun; Kim, Gwe-Ya
2014-09-01
The aim of this study was to evaluate and improve the reliability of proton quality assurance (QA) processes and to provide an optimal customized tolerance level using the statistical process control (SPC) methodology. The authors investigated the consistency check of dose per monitor unit (D/MU) and range in proton beams to see whether it was within the tolerance level of the daily QA process. This study analyzed the difference between the measured and calculated ranges along the central axis to improve the patient-specific QA process in proton beams by using process capability indices. The authors established a customized tolerance level of ±2% for D/MU and ±0.5 mm for beam range in the daily proton QA process. In the authors' analysis of the process capability indices, the patient-specific range measurements were capable of a specification limit of ±2% in clinical plans. SPC methodology is a useful tool for customizing optimal QA tolerance levels and improving the quality of proton machine maintenance, treatment delivery, and ultimately patient safety.
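For reference, the process capability indices used in such SPC analyses reduce to a few lines; the ±2% limits below mirror the D/MU tolerance quoted in the abstract, while the sample deviations are illustrative.

import statistics

def capability(data, lsl, usl):
    """Return (Cp, Cpk) for given lower/upper specification limits."""
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)                 # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # capability with centering
    return cp, cpk

dmu_error_pct = [0.3, -0.1, 0.5, 0.2, -0.4, 0.1, 0.6, -0.2]  # daily D/MU deviations
print(capability(dmu_error_pct, lsl=-2.0, usl=2.0))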
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-02-15
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED light target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, direction location algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system.
Establishment of optimized MDCK cell lines for reliable efflux transport studies.
Gartzke, Dominik; Fricker, Gert
2014-04-01
Madin-Darby canine kidney (MDCK) cells transfected with the human MDR1 gene (MDCK-MDR1), encoding P-glycoprotein (hPgp, ABCB1), are widely used for transport studies to identify drug candidates as substrates of this efflux protein. Therefore, it is necessary to rely on constant and comparable expression levels of Pgp to avoid false negative or positive results. We generated a cell line with homogenously high and stable expression of hPgp by sorting single clones from an MDCK-MDR1 cell pool using fluorescence-activated cell sorting (FACS). To obtain control cell lines for evaluation of cross-interactions with endogenous canine Pgp (cPgp), wild-type cells were sorted with a low expression pattern of cPgp in comparison with the MDCK-MDR1 cells. Expression of other transporters was also characterized in both cell lines by quantitative real-time PCR and Western blot. Pgp function was investigated applying the Calcein-AM assay as well as bidirectional transport assays using ³H-digoxin, ³H-vinblastine, and ³H-quinidine as substrates. The generated MDCK-MDR1 cell lines showed high expression of hPgp. Control MDCK-WT cells were optimized to show a comparable expression level of cPgp in comparison with the MDCK-MDR1 cell lines. The generated cell lines showed higher and more selective Pgp transport compared with parental cells. Therefore, they provide a significant improvement in the performance of efflux studies, yielding more reliable results.
Zhou, Fangbin; Zhou, Yaying; Yang, Ming; Wen, Jinli; Dong, Jun; Tan, Wenyong
2018-01-01
Circulating endothelial cells (CECs) and their subpopulations could be potential novel biomarkers for various malignancies. However, reliable enumeration methods are warranted to further improve their clinical utility. This study aimed to optimize a flow cytometric method (FCM) assay for CECs and subpopulations in peripheral blood for patients with solid cancers. An FCM assay was used to detect and identify CECs. A panel of 60 blood samples, including 44 metastatic cancer patients and 16 healthy controls, was used in this study. Some key issues of CEC enumeration, including sample material and anticoagulant selection, optimal titration of antibodies, lysis/wash procedures of blood sample preparation, conditions of sample storage, sufficient cell events to enhance the signal, fluorescence-minus-one controls instead of isotype controls to reduce background noise, optimal selection of cell surface markers, and evaluating the reproducibility of our method, were integrated and investigated. Wilcoxon and Mann-Whitney U tests were used to determine statistically significant differences. In this validation study, we refined a five-color FCM method to detect CECs and their subpopulations in peripheral blood of patients with solid tumors. Several key technical issues regarding preanalytical elements, FCM data acquisition, and analysis were addressed. Furthermore, we clinically validated the utility of our method. The baseline levels of mature CECs, endothelial progenitor cells, and activated CECs were higher in cancer patients than healthy subjects (P<0.01). However, there was no significant difference in resting CEC levels between healthy subjects and cancer patients (P=0.193). We integrated and comprehensively addressed significant technical issues found in previously published assays and validated the reproducibility and sensitivity of our proposed method. Future work is required to explore the potential of our optimized method in clinical oncologic applications.
Li, Ning; Cürüklü, Baran; Bastos, Joaquim; Sucasas, Victor; Fernandez, Jose Antonio Sanchez; Rodriguez, Jonathan
2017-01-01
The aim of the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) project is to make autonomous underwater vehicles (AUVs), remote operated vehicles (ROVs) and unmanned surface vehicles (USVs) more accessible and useful. To achieve cooperation and communication between different AUVs, these must be able to exchange messages, so an efficient and reliable communication network is necessary for SWARMs. In order to provide an efficient and reliable communication network for mission execution, one of the important and necessary issues is the topology control of the network of AUVs that are cooperating underwater. However, due to the specific properties of an underwater AUV cooperation network, such as the high mobility of AUVs, large transmission delays, low bandwidth, etc., the traditional topology control algorithms primarily designed for terrestrial wireless sensor networks cannot be used directly in the underwater environment. Moreover, these algorithms, in which the nodes adjust their transmission power once the current transmission power does not equal an optimal one, are costly in an underwater cooperating AUV network. Considering these facts, in this paper, we propose a Probabilistic Topology Control (PTC) algorithm for an underwater cooperating AUV network. In PTC, when the transmission power of an AUV is not equal to the optimal transmission power, then whether the transmission power needs to be adjusted or not will be determined based on the AUV’s parameters. Each AUV determines their own transmission power adjustment probability based on the parameter deviations. The larger the deviation, the higher the transmission power adjustment probability is, and vice versa. For evaluating the performance of PTC, we combine the PTC algorithm with the Fuzzy logic Topology Control (FTC) algorithm and compare the performance of these two algorithms. The simulation results have demonstrated that the PTC is efficient at reducing the transmission power adjustment ratio while improving the network performance. PMID:28471387
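A minimal sketch of the probabilistic adjustment rule described above: the probability of paying the cost of a power adjustment grows with the deviation from the optimal transmission power. The linear mapping from deviation to probability is an illustrative assumption, not the paper's exact formula.

import random

def maybe_adjust(p_current, p_optimal, p_max):
    """Adjust with probability proportional to the normalized deviation."""
    deviation = abs(p_current - p_optimal) / p_max   # in [0, 1]
    if random.random() < deviation:                  # larger deviation -> more likely
        return p_optimal, True
    return p_current, False

power, adjusted = maybe_adjust(p_current=8.0, p_optimal=5.0, p_max=10.0)
print(power, adjusted)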
Nanowire growth process modeling and reliability models for nanodevices
NASA Astrophysics Data System (ADS)
Fathi Aghdam, Faranak
Nowadays, nanotechnology is becoming an inescapable part of everyday life. The big barrier in front of its rapid growth is our incapability of producing nanoscale materials in a reliable and cost-effective way. In fact, the current yield of nano-devices is very low (around 10 %), which makes fabrications of nano-devices very expensive and uncertain. To overcome this challenge, the first and most important step is to investigate how to control nano-structure synthesis variations. The main directions of reliability research in nanotechnology can be classified either from a material perspective or from a device perspective. The first direction focuses on restructuring materials and/or optimizing process conditions at the nano-level (nanomaterials). The other direction is linked to nano-devices and includes the creation of nano-electronic and electro-mechanical systems at nano-level architectures by taking into account the reliability of future products. In this dissertation, we have investigated two topics on both nano-materials and nano-devices. In the first research work, we have studied the optimization of one of the most important nanowire growth processes using statistical methods. Research on nanowire growth with patterned arrays of catalyst has shown that the wire-to-wire spacing is an important factor affecting the quality of resulting nanowires. To improve the process yield and the length uniformity of fabricated nanowires, it is important to reduce the resource competition between nanowires during the growth process. We have proposed a physical-statistical nanowire-interaction model considering the shadowing effect and shared substrate diffusion area to determine the optimal pitch that would ensure the minimum competition between nanowires. A sigmoid function is used in the model, and the least squares estimation method is used to estimate the model parameters. The estimated model is then used to determine the optimal spatial arrangement of catalyst arrays. This work is an early attempt that uses a physical-statistical modeling approach to studying selective nanowire growth for the improvement of process yield. In the second research work, the reliability of nano-dielectrics is investigated. As electronic devices get smaller, reliability issues pose new challenges due to unknown underlying physics of failure (i.e., failure mechanisms and modes). This necessitates new reliability analysis approaches related to nano-scale devices. One of the most important nano-devices is the transistor that is subject to various failure mechanisms. Dielectric breakdown is known to be the most critical one and has become a major barrier for reliable circuit design in nano-scale. Due to the need for aggressive downscaling of transistors, dielectric films are being made extremely thin, and this has led to adopting high permittivity (k) dielectrics as an alternative to widely used SiO2 in recent years. Since most time-dependent dielectric breakdown test data on bilayer stacks show significant deviations from a Weibull trend, we have proposed two new approaches to modeling the time to breakdown of bi-layer high-k dielectrics. In the first approach, we have used a marked space-time self-exciting point process to model the defect generation rate. 
A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is employed to minimize the Kullback-Leibler divergence between the empirical distribution obtained from the real data and the one based on the simulated data to find the best parameter values and to predict the total time to failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in dielectric that is a function of time, space and size of the previous defects. In addition, in the second approach, a k-out-of-n system framework is proposed to estimate the total failure time after the generation of more than one soft breakdown.
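A minimal sketch of the fitting strategy described above: simulate breakdown times under candidate parameters and keep the parameters whose simulated histogram is closest to the data in Kullback-Leibler divergence. A Weibull sampler stands in here for the space-time point-process simulator, and the parameter grid is illustrative.

import numpy as np

def kl(p, q, eps=1e-9):
    """KL divergence between two discrete distributions (smoothed)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def hist(samples, bins):
    h, _ = np.histogram(samples, bins=bins)
    return h / h.sum()

rng = np.random.default_rng(2)
data = rng.weibull(1.4, 500) * 100.0              # observed breakdown times (stand-in)
bins = np.linspace(0, data.max(), 20)
best = min(((shape, scale) for shape in (1.0, 1.2, 1.4, 1.6)
            for scale in (80.0, 100.0, 120.0)),
           key=lambda th: kl(hist(data, bins),
                             hist(rng.weibull(th[0], 2000) * th[1], bins)))
print("best (shape, scale):", best)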
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of the loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. However, two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, these two tradeoffs, a primary and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
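As a reference point for the LOLP computations discussed above, the exact LOLP of a small system can be obtained by enumerating unit outage states; the three-unit system, forced outage rates, and load level below are illustrative.

from itertools import product

units = [(100, 0.05), (100, 0.05), (50, 0.02)]   # (capacity MW, forced outage rate)
load = 180.0                                     # MW demand in the period

lolp = 0.0
for states in product([0, 1], repeat=len(units)):  # 1 = unit on outage
    prob, cap = 1.0, 0.0
    for (c, q), out in zip(units, states):
        prob *= q if out else (1 - q)
        cap += 0.0 if out else c
    if cap < load:                                 # loss-of-load state
        lolp += prob
print("LOLP =", lolp)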
NASA Astrophysics Data System (ADS)
Andriushin, A. V.; Zverkov, V. P.; Kuzishchin, V. F.; Ryzhkov, O. S.; Sabanin, V. R.
2017-11-01
Results are presented from the study and tuning of the "do itself" (upstream pressure) automatic control system (ACS) for steam pressure in the main steam collector, with high-speed feedback on steam pressure in the turbine regulating stage. The ACS tuning is performed on a simulation model of the controlled object developed for this purpose, with load-dependent static and dynamic characteristics and a nonlinear control algorithm with pulse control of the turbine main servomotor. A method for tuning the nonlinear ACS with a numerical multiparametric optimization algorithm and a procedure for separate dynamic adjustment of the control devices in a two-loop ACS are proposed and implemented. It is shown that the nonlinear ACS tuned with the proposed method, with constant controller parameters, ensures reliable and high-quality operation without oscillations in the transient processes over the operating range of turbine loads.
Modeling and Simulation of Bus Dispatching Policy for Timed Transfers on Signalized Networks
NASA Astrophysics Data System (ADS)
Cho, Hsun-Jung; Lin, Guey-Shii
2007-12-01
The major work of this study is to formulate the system cost functions and to integrate the bus dispatching policy with signal control. The integrated model mainly includes a flow dispersion model for links, a signal control model for nodes, and a dispatching control model for transfer terminals. All these models are inter-related for transfer operations in a one-center transit network. The integrated model, which combines dispatching policies with flexible signal control modes, can be applied to assess the effectiveness of transfer operations. It is found that, if bus arrival information is reliable, an early dispatching decision made at the mean bus arrival time is preferable. The costs of coordinated operations with slack times are relatively low at the optimal common headway when applying adaptive route control. Based on these findings, a threshold function of bus headway for justifying adaptive signal route control under various time values of auto drivers is developed.
A Simulation Optimization Approach to Epidemic Forecasting
Nsoesie, Elaine O.; Beckman, Richard J.; Shashaani, Sara; Nagaraj, Kalyani S.; Marathe, Madhav V.
2013-01-01
Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem, however this is a preliminary step and the results suggest that more can be achieved in this area. PMID:23826222
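A minimal sketch of the SIMOP idea: fit epidemic-curve parameters with the Nelder-Mead simplex by minimizing the squared error between observed weekly counts and a model curve. A logistic cumulative curve and synthetic counts stand in here for the individual-based simulation and surveillance data of the paper.

import numpy as np
from scipy.optimize import minimize

weeks = np.arange(10)
observed = np.array([3, 7, 16, 34, 60, 85, 96, 99, 100, 100], float)

def logistic(theta, t):
    """Cumulative cases: final size k, growth rate r, inflection week t0."""
    k, r, t0 = theta
    return k / (1.0 + np.exp(-r * (t - t0)))

loss = lambda th: float(np.sum((logistic(th, weeks) - observed) ** 2))
fit = minimize(loss, x0=[90.0, 1.0, 4.0], method="Nelder-Mead")
print(fit.x)   # fitted (final size, growth rate, peak-growth week)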
Reliability of the Achilles tendon tap reflex evoked during stance using a pendulum hammer.
Mildren, Robyn L; Zaback, Martin; Adkin, Allan L; Frank, James S; Bent, Leah R
2016-01-01
The tendon tap reflex (T-reflex) is often evoked in relaxed muscles to assess spinal reflex circuitry. Factors contributing to reflex excitability are modulated to accommodate specific postural demands. Thus, there is a need to be able to assess this reflex in a state where spinal reflex circuitry is engaged in maintaining posture. The aim of this study was to determine whether a pendulum hammer could provide controlled stimuli to the Achilles tendon and evoke reliable muscle responses during normal stance. A second aim was to establish appropriate stimulus parameters for experimental use. Fifteen healthy young adults stood on a forceplate while taps were applied to the Achilles tendon under conditions in which postural sway was constrained (by providing centre of pressure feedback) or unconstrained (no feedback) from an invariant release angle (50°). Twelve participants repeated this testing approximately six months later. Within one experimental session, tap force and T-reflex amplitude were found to be reliable regardless of whether postural sway was constrained (tap force ICC=0.982; T-reflex ICC=0.979) or unconstrained (tap force ICC=0.968; T-reflex ICC=0.964). T-reflex amplitude was also reliable between experimental sessions (constrained ICC=0.894; unconstrained ICC=0.890). When a T-reflex recruitment curve was constructed, optimal mid-range responses were observed using a 50° release angle. These results demonstrate that reliable Achilles T-reflexes can be evoked in standing participants without the need to constrain posture. The pendulum hammer provides a simple method to allow researchers and clinicians to gather information about reflex circuitry in a state where it is involved in postural control.
Battery management systems (BMS) optimization for electric vehicles (EVs) in Malaysia
NASA Astrophysics Data System (ADS)
Salehen, P. M. W.; Su'ait, M. S.; Razali, H.; Sopian, K.
2017-04-01
Following the UN Climate Change Conference 2009 in Copenhagen, Denmark, Malaysia committed seriously to a "Go Green" campaign with the aim of reducing GHG emissions by 40% by the year 2020. Therefore, the National Green Technology Policy was enacted in 2009 with transportation as one of its focus sectors, including hybrid (HEVs), electric (EVs) and fuel cell vehicles, with the purpose of keeping up with the worst-case scenario. While the number of registered cars has been increasing by 1 million yearly, the total has doubled in the last two decades. Consequently, CO2 emissions in Malaysia have reached up to 97.1% and will continue to increase, mainly due to activities in the transportation sector. Nevertheless, Malaysia is now moving towards green cars, namely battery-based EVs. This type of transportation mainly needs power performance optimization, which is handled by the battery management system (BMS). The BMS is an essential module that provides reliable power management, optimal power performance and a safe vehicle, leading back to power optimization in EVs. Thus, this paper proposes power performance optimization for various setups of lithium-ion cathodes with graphene anodes, using MATLAB/SIMULINK software, for better management performance and extended EV driving range.
Parker, Maximilian G; Tyson, Sarah F; Weightman, Andrew P; Abbott, Bruce; Emsley, Richard; Mansell, Warren
2017-11-01
Computational models that simulate individuals' movements in pursuit-tracking tasks have been used to elucidate mechanisms of human motor control. Whilst there is evidence that individuals demonstrate idiosyncratic control-tracking strategies, it remains unclear whether models can be sensitive to these idiosyncrasies. Perceptual control theory (PCT) provides a unique model architecture with an internally set reference value parameter, and can be optimized to fit an individual's tracking behavior. The current study investigated whether PCT models could show temporal stability and individual specificity over time. Twenty adults completed three blocks of 15 one-minute pursuit-tracking trials. Two blocks (training and post-training) were completed in one session and the third was completed after 1 week (follow-up). The target moved in a one-dimensional, pseudorandom pattern. PCT models were optimized to the training data using a least-mean-squares algorithm, and validated with data from post-training and follow-up. We found significant inter-individual variability (partial η²: .464-.697) and intra-individual consistency (Cronbach's α: .880-.976) in parameter estimates. Polynomial regression revealed that all model parameters, including the reference value parameter, contribute to simulation accuracy. Participants' tracking performances were significantly more accurately simulated by models developed from their own tracking data than by models developed from other participants' data. We conclude that PCT models can be optimized to simulate the performance of an individual and that the test-retest reliability of individual models is a necessary criterion for evaluating computational models of human performance.
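A minimal sketch of the kind of PCT tracking loop the abstract describes, assuming a single control unit whose output is leaky-integrated from the error between an internal reference and the perceived cursor-target distance; the gains, time step, and target signal are invented rather than taken from the paper.

```python
import numpy as np

dt, gain, slow, reference = 0.01, 80.0, 5.0, 0.0   # reference: desired distance = 0
t = np.arange(0, 60, dt)
target = np.sin(0.5 * t) + 0.3 * np.sin(1.7 * t)   # pseudorandom-like target path

cursor, output, trace = 0.0, 0.0, []
for target_pos in target:
    perception = cursor - target_pos                 # perceived distance to target
    error = reference - perception                   # PCT: error against internal reference
    output += slow * (gain * error - output) * dt    # leaky integration of output
    cursor += output * dt                            # output moves the cursor
    trace.append(cursor)

print(np.corrcoef(trace, target)[0, 1])              # tracking quality of the simulation
```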
NASA Astrophysics Data System (ADS)
Li, Haichen; Qin, Tao; Wang, Weiping; Lei, Xiaohui; Wu, Wenhui
2018-02-01
Due to its weakness in maintaining diversity and reaching the global optimum, standard particle swarm optimization has not performed well in reservoir optimal operation. To solve this problem, this paper introduces the downhill simplex method to work together with standard particle swarm optimization. The application of this approach to the optimal operation of the Goupitan reservoir shows that the improved method has better accuracy and higher reliability with a small investment.
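The hybrid idea reads as: global search with standard PSO, then local polish with the downhill simplex. A hedged sketch under stated assumptions — a generic Rastrigin objective stands in for the reservoir operation model, and the swarm parameters are conventional defaults:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):                       # stand-in for a reservoir operation cost
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)   # Rastrigin

rng = np.random.default_rng(2)
n, d, w, c1, c2 = 30, 4, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n, d)); v = np.zeros((n, d))
pbest, pval = x.copy(), np.apply_along_axis(objective, 1, x)
for _ in range(200):
    g = pbest[pval.argmin()]            # global best guides the swarm
    v = w * v + c1 * rng.random((n, d)) * (pbest - x) + c2 * rng.random((n, d)) * (g - x)
    x = x + v
    f = np.apply_along_axis(objective, 1, x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]

polished = minimize(objective, pbest[pval.argmin()], method="Nelder-Mead")
print(pval.min(), polished.fun)         # simplex step typically refines the PSO best
```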
Choosing the optimal wind turbine variant using the “ELECTRE” method
NASA Astrophysics Data System (ADS)
Ţişcă, I. A.; Anuşca, D.; Dumitrescu, C. D.
2017-08-01
This paper presents a method of choosing the “optimal” alternative, both under certainty and under uncertainty, based on relevant analysis criteria. Taking into account that a product can be assimilated to a system and that the reliability of the system depends on the reliability of its components, the choice of product (the appropriate system decision) can be made using the “ELECTRE” method, depending on the level of reliability of each product. In the paper, the “ELECTRE” method is used to choose the optimal version of a wind turbine required to equip a wind farm in western Romania. The problems to be solved relate to the reliability issues of currently operating wind turbines. A set of criteria has been proposed to compare two or more products from a range of available products: operating conditions, environmental conditions during operation, and time requirements. Using the hierarchical ELECTRE method, the optimal wind turbine variant and the order of preference of the variants are determined on the basis of the obtained concordance coefficients, with the values chosen as thresholds being arbitrary.
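One core ELECTRE ingredient, the concordance matrix, is easy to illustrate. The sketch below uses invented scores and weights for three hypothetical turbine variants over three criteria, with higher values assumed better on every criterion:

```python
import numpy as np

scores = np.array([        # rows: turbine variants, cols: criteria
    [0.8, 0.6, 0.9],       # e.g. operating conditions, environment, time requirements
    [0.7, 0.9, 0.6],
    [0.9, 0.5, 0.7],
])
weights = np.array([0.5, 0.3, 0.2])

m = len(scores)
concordance = np.zeros((m, m))
for a in range(m):
    for b in range(m):
        if a != b:
            # sum the weights of criteria on which variant a is at least as good as b
            concordance[a, b] = weights[scores[a] >= scores[b]].sum()

print(concordance)   # compare against a chosen concordance threshold, e.g. 0.6
```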
Guo, Wenzhong; Hong, Wei; Zhang, Bin; Chen, Yuzhong; Xiong, Naixue
2014-01-01
Mobile security is one of the most fundamental problems in Wireless Sensor Networks (WSNs). The data transmission path can be compromised by disabled nodes. To construct a secure and reliable network, designing an adaptive route strategy that optimizes the energy consumption and network lifetime costs of aggregation is of great importance. In this paper, we address the reliable data aggregation route problem for WSNs. Firstly, to ensure nodes work properly, we propose a data aggregation route algorithm that improves the energy efficiency of the WSN. The construction process, achieved through discrete particle swarm optimization (DPSO), saves node energy costs. Then, to balance the network load and establish a reliable network, an adaptive route algorithm with minimal energy and maximum lifetime is proposed. Since this is a non-linear constrained multi-objective optimization problem, we propose a DPSO with a multi-objective fitness function combined with a phenotype sharing function and a penalty function to find available routes. Experimental results show that, compared with other tree routing algorithms, our algorithm can effectively reduce energy consumption and trade off energy consumption against network lifetime. PMID:25215944
Adaptive model-predictive controller for magnetic resonance guided focused ultrasound therapy.
de Bever, Joshua; Todd, Nick; Payne, Allison; Christensen, Douglas A; Roemer, Robert B
2014-11-01
Minimising treatment time and protecting healthy tissues are conflicting goals that play major roles in making magnetic resonance image-guided focused ultrasound (MRgFUS) therapies clinically practical. We have developed and tested in vivo an adaptive model-predictive controller (AMPC) that reduces treatment time, ensures safety and efficacy, and provides flexibility in treatment set-up. The controller realises time savings by modelling the heated treatment cell's future temperatures and thermal dose accumulation in order to anticipate the optimal time to switch to the next cell. Selected tissues are safeguarded by a configurable temperature constraint. Simulations quantified the time savings realised by each controller feature as well as the trade-offs between competing safety and treatment time parameters. In vivo experiments in rabbit thighs established the controller's effectiveness and reliability. In all in vivo experiments the target thermal dose of at least 240 CEM43 was delivered everywhere in the treatment volume. The controller's temperature safety limit reliably activated and constrained all protected tissues to <9 CEM43. Simulations demonstrated the path independence of the controller, and that a path which successively proceeds to the hottest untreated neighbouring cell leads to significant time savings, e.g. when compared to a concentric spiral path. Use of the AMPC produced a compounding time-saving effect; reducing the treatment cells' heating times concurrently reduced heating of normal tissues, which eliminated cooling periods. Adaptive model-predictive control can automatically deliver safe, effective MRgFUS treatments while significantly reducing treatment times.
Control and design of multiple unmanned air vehicles for persistent surveillance
NASA Astrophysics Data System (ADS)
Nigam, Nikhil
Control of multiple autonomous aircraft for search and exploration is a topic of current research interest for applications such as weather monitoring, geographical surveys, search and rescue, tactical reconnaissance, and extra-terrestrial exploration, and the need to distribute sensing is driven by considerations of efficiency, reliability, cost and scalability. Hence, this problem has been extensively studied in the fields of controls and artificial intelligence. The task of persistent surveillance is different from a coverage/exploration problem, in that all areas need to be continuously searched, minimizing the time between visitations to each region in the target space. This distinction does not allow a straightforward application of most exploration techniques to the problem, although ideas from these methods can still be used. The use of aerial vehicles is motivated by their ability to cover larger spaces and their relative insensitivity to terrain. However, the dynamics of Unmanned Air Vehicles (UAVs) adds complexity to the control problem. Most of the work in the literature decouples the vehicle dynamics and control policies, but their interaction is particularly interesting for a surveillance mission. Stochastic environments and UAV failures further enrich the problem by requiring the control policies to be robust, and this aspect is particularly important for hardware implementations. For a persistent mission, it becomes imperative to consider the range/endurance constraints of the vehicles. The coupling of the control policy with the endurance constraints of the vehicles is an aspect that has not been sufficiently explored. Design of UAVs for desirable mission performance is also an issue of considerable significance. The use of a single monolithic optimization for such a problem has practical limitations, and decomposition-based design is a potential alternative. In this research, high-level control policies are devised that are scalable, reliable, efficient, and robust to changes in the environment. Most of the existing techniques that carry performance guarantees are not scalable or robust to changes. The scalable techniques are often heuristic in nature, resulting in a lack of reliability and performance. Our policies are tested in a multi-UAV simulation environment developed for this problem, and shown to be near-optimal in spite of being completely reactive in nature. We explicitly account for the coupling between aircraft dynamics and control policies as well, and suggest modifications to improve performance under dynamic constraints. A smart refueling policy is also developed to account for limited endurance, and large performance benefits are observed. The method is based on the solution of a linear program that can be efficiently solved online in a distributed setting, unlike previous work. The Vehicle Swarm Technology Laboratory (VSTL), a hardware testbed developed at Boeing Research and Technology for evaluating swarms of UAVs, is described next and used to test the control strategy in a real-world scenario. The simplicity and robustness of the strategy allow easy implementation and near replication of the performance observed in simulation. Finally, an architecture for system-of-systems design based on Collaborative Optimization (CO) is presented. Earlier work coupling operations and design has used frameworks that make certain assumptions not valid for this problem. The efficacy of our approach is illustrated through preliminary design results, and extension to more realistic settings is also demonstrated.
Reducing maintenance costs in agreement with CNC machine tools reliability
NASA Astrophysics Data System (ADS)
Ungureanu, A. L.; Stan, G.; Butunoi, P. A.
2016-08-01
Aligning maintenance strategy with reliability is a challenge due to the need to find an optimal balance between them. Because the various methods described in the relevant literature involve laborious calculations or use of software that can be costly, this paper proposes a method that is easier to implement on CNC machine tools. The new method, called the Consequence of Failure Analysis (CFA) is based on technical and economic optimization, aimed at obtaining a level of required performance with minimum investment and maintenance costs.
Criticism of generally accepted fundamentals and methodologies of traffic and transportation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerner, Boris S.
It is explained why the set of the fundamental empirical features of traffic breakdown (a transition from free flow to congested traffic) should be the empirical basis for any traffic and transportation theory that can be reliably used for control and optimization in traffic networks. It is shown that generally accepted fundamentals and methodologies of traffic and transportation theory are not consistent with the set of the fundamental empirical features of traffic breakdown at a highway bottleneck. To these fundamentals and methodologies of traffic and transportation theory belong (i) the Lighthill-Whitham-Richards (LWR) theory, (ii) the General Motors (GM) model class (for example, the Herman, Gazis et al. GM model, Gipps’s model, Payne’s model, Newell’s optimal velocity (OV) model, Wiedemann’s model, Bando et al. OV model, Treiber’s IDM, Krauß’s model), (iii) the understanding of highway capacity as a particular stochastic value, and (iv) principles for traffic and transportation network optimization and control (for example, Wardrop’s user equilibrium (UE) and system optimum (SO) principles). Alternatively to these generally accepted fundamentals and methodologies of traffic and transportation theory, we discuss three-phase traffic theory as the basis for traffic flow modeling as well as briefly consider the network breakdown minimization (BM) principle for the optimization of traffic and transportation networks with road bottlenecks.
Chakrabortty, S; Sen, M; Pal, P
2014-03-01
A simulation software package (ARRPA) has been developed on the Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant, in the absence of any such software. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treating chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs of the computer simulation program and the experimental findings is excellent and consistent under varying operating conditions, reflecting a high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R² = 0.989) and Willmott d-index (0.989) are indicators of the capability of the software in analyzing the performance of the plant. The software permits pre-analysis and manipulation of input data, helps in optimization and exhibits the performance of an integrated plant visually on a graphical platform. Performance analysis of the whole system as well as of the individual units is possible using the tool. The software, the first of its kind in its domain and running in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for removal of arsenic from contaminated groundwater.
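The two agreement statistics quoted for ARRPA, R² and Willmott's index of agreement d, can be computed as below; the observed/predicted arrays are placeholders, not the paper's data:

```python
import numpy as np

obs = np.array([12.0, 15.5, 9.8, 20.1, 17.3])    # e.g. measured arsenic removal
pred = np.array([12.4, 15.1, 10.2, 19.6, 17.8])  # model predictions

ss_res = np.sum((obs - pred) ** 2)
r2 = 1 - ss_res / np.sum((obs - obs.mean()) ** 2)
d = 1 - ss_res / np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
print(round(r2, 3), round(d, 3))                 # both near 1 indicate close agreement
```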
Li, Chunyan; Wu, Pei-Ming; Wu, Zhizhen; Ahn, Chong H; LeDoux, David; Shutter, Lori A; Hartings, Jed A; Narayan, Raj K
2012-02-01
The injured brain is vulnerable to increases in temperature after severe head injury. Therefore, accurate and reliable measurement of brain temperature is important to optimize patient outcome. In this work, we have fabricated, optimized and characterized temperature sensors for use with a micromachined smart catheter for multimodal intracranial monitoring. The developed temperature sensors have a resistance of 100.79 ± 1.19 Ω, a sensitivity of 67.95 mV/°C in the operating range from 15 to 50 °C, and a time constant of 180 ms. Under the optimized excitation current of 500 μA, an adequate signal-to-noise ratio was achieved without causing self-heating, and changes in immersion depth did not introduce clinically significant measurement errors (<0.01 °C). We evaluated the accuracy and long-term drift (5 days) of twenty temperature sensors in comparison to two types of commercial temperature probes (USB Reference Thermometer, a NIST-traceable bulk probe with 0.05 °C accuracy; and IT-21, a type T clinical microprobe with guaranteed 0.1 °C accuracy) under controlled laboratory conditions. These in vitro experimental data showed that the temperature measurement performance of our sensors was accurate and reliable over the course of 5 days. The smart catheter temperature sensors provided accuracy and long-term stability comparable to those of commercial tissue-implantable microprobes, and therefore provide a means for temperature measurement in a microfabricated, multimodal cerebral monitoring device.
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
Energy management and cooperation in microgrids
NASA Astrophysics Data System (ADS)
Rahbar, Katayoun
Microgrids are key components of future smart power grids, which integrate distributed renewable energy generators to efficiently serve the load demand locally. However, the random and intermittent characteristics of renewable energy generation may hinder the reliable operation of microgrids. This thesis is thus devoted to investigating new strategies for microgrids to optimally manage their energy consumption, energy storage system (ESS) and cooperation in real time to achieve reliable and cost-effective operation. The thesis starts with a single microgrid system. The optimal energy scheduling and ESS management policy is derived to minimize the energy cost of the microgrid resulting from drawing conventional energy from the main grid, under both off-line and online setups, where the renewable energy generation/load demand are assumed to be non-causally known and causally known at the microgrid, respectively. The proposed online algorithm is designed based on the optimal off-line solution and works under arbitrary (even unknown) realizations of future renewable energy generation/load demand. It is therefore more practically applicable than solutions based on conventional techniques such as dynamic programming and stochastic programming, which require prior knowledge of renewable energy generation and load demand realizations/distributions. Next, for a group of microgrids that cooperate in energy management, we study efficient methods for sharing energy among them in both fully and partially cooperative scenarios, where microgrids are of common interest and self-interested, respectively. For fully cooperative energy management, the off-line optimization problem is first formulated and optimally solved, and a distributed algorithm is proposed to minimize the total (sum) energy cost of the microgrids. Inspired by the results obtained from the off-line optimization, efficient online algorithms are proposed for real-time energy management, which are of low complexity and work given arbitrary realizations of renewable energy generation/load demand. On the other hand, for self-interested microgrids, the partially cooperative energy management problem is formulated and a distributed algorithm is proposed to optimize the energy cooperation such that the energy costs of individual microgrids are simultaneously reduced relative to the case without energy cooperation, while limited information is shared among the microgrids and the central controller.
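As a toy illustration of the online (causal) setting the thesis addresses — not its actual algorithm — the following rule-based dispatch charges the ESS from renewable surplus and discharges on deficit, buying any remainder from the main grid; every number is invented:

```python
import numpy as np

rng = np.random.default_rng(3)
renew = np.clip(rng.normal(40, 15, 24), 0, None)        # hourly renewable output, kW
load = 50 + 10 * np.sin(np.arange(24) / 24 * 2 * np.pi) # hourly load, kW

soc, cap, rate, grid_cost = 20.0, 100.0, 25.0, []
for g, l in zip(renew, load):
    net = g - l
    if net >= 0:                                   # surplus: store what fits
        soc += min(net, rate, cap - soc)
        grid_cost.append(0.0)
    else:                                          # deficit: discharge, then buy
        discharge = min(-net, rate, soc)
        soc -= discharge
        grid_cost.append((-net - discharge) * 0.12)  # assumed $/kWh price

print(sum(grid_cost))   # cost of conventional energy drawn from the main grid
```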
Microgrid optimal scheduling considering impact of high penetration wind generation
NASA Astrophysics Data System (ADS)
Alanazi, Abdulaziz
The objective of this thesis is to study the impact of high-penetration wind energy on the economic and reliable operation of microgrids. Wind power is variable, i.e., constantly changing, and nondispatchable, i.e., it cannot be controlled by the microgrid controller. Thus, accurate forecasting of wind power is an essential task in order to study its impacts on microgrid operation. Two commonly used forecasting methods, the Autoregressive Integrated Moving Average (ARIMA) and the Artificial Neural Network (ANN), have been used in this thesis to improve wind power forecasting. The forecasting error is calculated using the Mean Absolute Percentage Error (MAPE) and is improved using the ANN. The wind forecast is further used in the microgrid optimal scheduling problem. The microgrid optimal scheduling is performed by developing a viable model for security-constrained unit commitment (SCUC) based on a mixed-integer linear programming (MILP) method. The proposed SCUC is solved for various wind penetration levels and the relationship between the total cost and the wind power penetration is found. In order to reduce microgrid power transfer fluctuations, an additional constraint is proposed and added to the SCUC formulation. The new constraint controls the time-based fluctuations. The impact of the constraint on microgrid SCUC results is tested and validated through numerical analysis. Finally, the applicability of the proposed models is demonstrated through numerical simulations.
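A bare-bones sketch of an MILP unit-commitment step with a power-transfer fluctuation limit, in the spirit of the added SCUC constraint. It uses the PuLP library; the two-period data, the single generator, and the swing limit are all assumptions, not the thesis model:

```python
import pulp

load = [90.0, 100.0]         # MW demand per period (assumed)
wind = [30.0, 55.0]          # MW wind forecast, nondispatchable (assumed)
pmax, cost, start = 120.0, 25.0, 200.0   # single-generator data (assumed)
max_swing = 20.0             # MW limit on period-to-period grid-transfer change

m = pulp.LpProblem("toy_scuc", pulp.LpMinimize)
u = [pulp.LpVariable(f"u{t}", cat="Binary") for t in range(2)]
p = [pulp.LpVariable(f"p{t}", lowBound=0) for t in range(2)]
g = [pulp.LpVariable(f"g{t}", lowBound=-50, upBound=50) for t in range(2)]

m += pulp.lpSum(cost * p[t] + start * u[t] for t in range(2))
for t in range(2):
    m += p[t] <= pmax * u[t]               # generation only when committed
    m += p[t] + wind[t] + g[t] == load[t]  # power balance with grid transfer g
m += g[1] - g[0] <= max_swing              # fluctuation (swing) constraint
m += g[0] - g[1] <= max_swing
m.solve(pulp.PULP_CBC_CMD(msg=0))
print([pulp.value(v) for v in p], [pulp.value(v) for v in g])
```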
Decision-theoretic methodology for reliability and risk allocation in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.
1985-01-01
This paper describes a methodology for allocating reliability and risk to various reactor systems, subsystems, components, operations, and structures in a consistent manner, based on a set of global safety criteria which are not rigid. The problem is formulated as a multiattribute decision analysis paradigm; the multiobjective optimization, which is performed on a PRA model and reliability cost functions, serves as the guiding principle for reliability and risk allocation. The concept of noninferiority is used in the multiobjective optimization problem. Finding the noninferior solution set is the main theme of the current approach. The assessment of the decision maker's preferences could then be performed more easily on the noninferior solution set. Some results of the methodology applications to a nontrivial risk model are provided and several outstanding issues such as generic allocation and preference assessment are discussed.
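The noninferior (Pareto) solution set at the core of this methodology can be extracted with a simple dominance filter. A hedged sketch with invented candidate allocations scored on two objectives to be minimized, say risk and cost:

```python
import numpy as np

candidates = np.random.default_rng(4).random((50, 2))   # columns: risk, cost

def noninferior(points):
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some point is <= p everywhere and < p somewhere
        dominated = np.any(np.all(points <= p, axis=1) &
                           np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

front = noninferior(candidates)
print(len(front))   # preference assessment is then done only on this smaller set
```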
NASA Astrophysics Data System (ADS)
Li, W. W.; Du, Z. Z.; Yuan, R. M.; Xiong, D. Z.; Shi, E. W.; Lu, G. N.; Dai, Z. Y.; Chen, X. Q.; Jiang, Z. Y.; Lv, Y. G.
2017-10-01
Smart meters represent the future development direction of the energy-saving smart grid. The load switch, one of the core parts of a smart meter, must provide high reliability, safety and the capability to endure the limit short-circuit current. For this reason, this paper discusses a quick, iteration-free simulation of the relationship between the attraction and counterforce of the load switch, establishes a dual response surface model of attraction and counterforce, and optimizes the design scheme of the load switch for a charge-control smart meter, thus increasing the electromagnetic attraction and spring counterforce. In this way, the paper puts forward a method to improve the withstand capacity for the limit short-circuit current.
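A dual response-surface step of this general kind might look as follows: fit quadratic surfaces to attraction and counterforce over coded design variables, then search for a design that raises the weaker of the two responses. This is a sketch under stated assumptions with synthetic data, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (30, 2))                      # coded design variables

def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

attract = 5 + 2*X[:, 0] - X[:, 0]**2 + rng.normal(0, 0.1, 30)    # synthetic responses
counter = 3 + 1.5*X[:, 1] - 0.5*X[:, 1]**2 + rng.normal(0, 0.1, 30)
ba, *_ = np.linalg.lstsq(features(X), attract, rcond=None)       # quadratic surface fits
bc, *_ = np.linalg.lstsq(features(X), counter, rcond=None)

grid = np.array([[a, b] for a in np.linspace(-1, 1, 41) for b in np.linspace(-1, 1, 41)])
Fa, Fc = features(grid) @ ba, features(grid) @ bc
best = grid[np.argmax(np.minimum(Fa, Fc))]           # maximize the weaker response
print(best)
```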
Depression screening optimization in an academic rural setting.
Aleem, Sohaib; Torrey, William C; Duncan, Mathew S; Hort, Shoshana J; Mecchella, John N
2015-01-01
Primary care plays a critical role in screening and management of depression. This paper focuses on leveraging the electronic health record (EHR) as well as work flow redesign to improve the efficiency and reliability of depression screening in two adult primary care clinics of a rural academic institution in the USA. The authors utilized various process improvement tools from lean six sigma methodology, including a project charter, swim lane process maps, a critical to quality tree, process control charts, fishbone diagrams, a frequency impact matrix, mistake proofing and a monitoring plan, in Define-Measure-Analyze-Improve-Control format. Interventions included a change in the depression screening tool, optimization of EHR data entry, follow-up of positive screens, staff training and EHR redesign. The depression screening rate for office-based primary care visits improved from 17.0 percent at baseline to 75.9 percent in the post-intervention control phase (p<0.001). Follow-up of positive depression screens with Patient Health Questionnaire-9 data collection remained above 90 percent. Duplication of depression screening increased from 0.6 percent initially to 11.7 percent and then decreased to 4.7 percent after optimization of data entry by patients and flow staff. The impact of the interventions on clinical outcomes could not be evaluated. Successful implementation, sustainability and revision of a process improvement initiative to facilitate screening, follow-up and management of depression in primary care requires accounting for the voice of the process (performance metrics), system limitations and the voice of the customer (staff and patients) to overcome various system, customer and human resource constraints.
Jin, Huaiping; Chen, Xiangguang; Yang, Jianwen; Wu, Lei; Wang, Li
2014-11-01
The lack of accurate process models and reliable online sensors for substrate measurements poses significant challenges for controlling substrate feeding accurately, automatically and optimally in fed-batch fermentation industries. It is still common practice to regulate the feeding rate through manual operation. To address this issue, a hybrid intelligent control method is proposed to enable automatic substrate feeding. The resulting control system consists of three modules: a presetting module that provides initial set-points; a predictive module that estimates substrate concentration online based on a new time interval-varying soft sensing algorithm; and a feedback compensator using expert rules. The effectiveness of the proposed approach is demonstrated through its successful application to an industrial fed-batch chlortetracycline fermentation process. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
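A skeletal sketch of the three-module structure described, with every function body invented for illustration: a preset feed-rate schedule, a stand-in soft sensor for substrate concentration, and an expert-rule feedback correction:

```python
def preset_rate(t_hours):
    return 2.0 + 0.05 * t_hours            # presetting module: initial set-point, L/h

def soft_sensor(ph, do, feed_hist):
    # stand-in for the time interval-varying soft sensing model in the paper
    return 0.8 + 0.3 * (7.0 - ph) + 0.1 * (30.0 - do) / 30.0 - 0.02 * sum(feed_hist[-3:])

def expert_correction(s_hat, target=1.0):
    if s_hat > 1.2 * target: return -0.3   # substrate accumulating: cut feed
    if s_hat < 0.8 * target: return +0.3   # substrate starving: boost feed
    return 0.0

feed_hist = []
for t in range(0, 48, 2):                  # 2-hour control intervals
    s_hat = soft_sensor(ph=6.8, do=28.0, feed_hist=feed_hist or [2.0])
    rate = max(0.0, preset_rate(t) + expert_correction(s_hat))
    feed_hist.append(rate)
print(feed_hist)
```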
NASA Astrophysics Data System (ADS)
Reinert, K. A.
The use of linear decision rules (LDR) and chance constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule prescribe the size and optimize a storage facility with a bypass. Chance constraints are introduced to explicitly treat reliability in terms of an appropriate value from an inverse cumulative distribution function. Details of deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed for optimizing the generator choice and the storage configuration for base load and peak operating conditions. Deficiencies in ability to predict reliability and to account for serial correlations are noted in the model, which is concluded useful for narrowing WECS design options.
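The chance-constraint mechanics mentioned above reduce, for a normally distributed quantity, to a deterministic equivalent built from the inverse CDF. A small sketch with an assumed demand distribution:

```python
from scipy.stats import norm

mu_demand, sigma_demand = 300.0, 60.0      # kW, assumed demand distribution
for alpha in (0.90, 0.95, 0.99):
    # P(capacity >= demand) >= alpha  <=>  capacity >= mu + z_alpha * sigma
    required = mu_demand + norm.ppf(alpha) * sigma_demand
    print(alpha, round(required, 1))       # firm capacity needed at each reliability
```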
The optimization on flow scheme of helium liquefier with genetic algorithm
NASA Astrophysics Data System (ADS)
Wang, H. R.; Xiong, L. Y.; Peng, N.; Liu, L. Q.
2017-01-01
There are several ways to organize the flow scheme of a helium liquefier, such as arranging the expanders in parallel (reverse Brayton stage) or in series (modified Brayton stages). In this paper, the inlet mass flows and temperatures of the expanders in the Collins cycle are optimized using a genetic algorithm (GA). Results show that the maximum liquefaction rate can be obtained when the system works at the optimal parameters. However, the reliability of the system is then poor, due to the high wheel speed of the first turbine. The study shows that the scheme in which expanders are arranged in series with heat exchangers between them has higher operational reliability but lower plant efficiency under the same conditions. Considering both liquefaction rate and system stability, another flow scheme is put forward in the hope of resolving this dilemma. The three configurations are compared from different aspects: economic cost, heat exchanger size, system reliability and exergy efficiency. In addition, the effect of the heat capacity ratio on heat transfer efficiency is discussed. A conclusion on choosing the liquefier configuration is given at the end, which is meaningful for the optimal design of helium liquefiers.
Partial storage optimization and load control strategy of cloud data centers.
Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela
2015-01-01
We present a novel approach to solve cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing the DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner.
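The concurrent dual-direction idea can be sketched in a few lines: fetch the front half of a partition from one node and the back half from another at the same time. In-memory byte stores stand in for real cloud nodes here:

```python
from concurrent.futures import ThreadPoolExecutor

partition = bytes(range(256)) * 4          # one stored partition (1 KiB)
node_a = node_b = partition                # two nodes holding the same partition

def fetch(node, start, end):
    return node[start:end]                 # stands in for a ranged network read

mid = len(partition) // 2
with ThreadPoolExecutor(max_workers=2) as pool:
    head = pool.submit(fetch, node_a, 0, mid)                # forward from node A
    tail = pool.submit(fetch, node_b, mid, len(partition))   # backward from node B
    data = head.result() + tail.result()

assert data == partition                   # reassembled partition is intact
```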
A real-time path rating calculation tool powered by HPC
DOE Office of Scientific and Technical Information (OSTI.GOV)
If transmission path ratings are determined in real time and optimized control methods can be implemented, congestion problems can be more effectively managed using the existing transmission assets, reducing congestion costs, avoiding capital expenditures for new physical assets, increasing revenues from the existing system, and maintaining reliability. In just one illustrative case, a BPA study has shown that a 1000-MW rating increase for a transmission path generates $15M in annual revenue, even if only 25% of the increased margin can be tapped for just 25% of the year.
NASA Technical Reports Server (NTRS)
Title, A. M.; Gillespie, B. A.; Mosher, J. W.
1982-01-01
A compact magnetograph system based on solid Fabry-Perot interferometers as the spectral isolation elements was studied. The theory of operation of several Fabry-Perot systems, the suitability of various magnetic lines, the signal levels expected for different modes of operation, and the optimal detector systems were investigated. The requirements that the lack of a polarization modulator placed upon the electronic signal chain were emphasized. The PLZT modulator was chosen as a satisfactory component with both high reliability and relatively low voltage requirements. Thermal control, line centering and velocity offset problems were solved by a Fabry-Perot configuration.
NASA Astrophysics Data System (ADS)
Lu, Mengqian; Lall, Upmanu; Robertson, Andrew W.; Cook, Edward
2017-03-01
Streamflow forecasts at multiple time scales provide a new opportunity for reservoir management to address competing objectives. Market instruments such as forward contracts with specified reliability are considered as a tool that may help address the perceived risk associated with the use of such forecasts in lieu of traditional operation and allocation strategies. A water allocation process that enables multiple contracts for water supply and hydropower production with different durations, while maintaining a prescribed level of flood risk reduction, is presented. The allocation process is supported by an optimization model that considers multi-time-scale ensemble forecasts of monthly streamflow and flood volume over the upcoming season and year, the desired reliability, and the pricing of proposed contracts for hydropower and water supply. It solves for the size of contracts at each reliability level that can be allocated for each future period, while meeting target end-of-period reservoir storage with a prescribed reliability. The contracts may be insurable, given that their reliability is verified through retrospective modeling. The process can allow reservoir operators to overcome their concerns as to the appropriate skill of probabilistic forecasts, while providing water users with short-term and long-term guarantees as to how much water or energy they may be allocated. An application of the optimization model to the Bhakra Dam, India, provides an illustration of the process. The issues of forecast skill and contract performance are examined. A field engagement of the idea would be useful to develop a real-world perspective and needs a suitable institutional environment.
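One element of the allocation process, sizing a contract at a stated reliability from an ensemble forecast, amounts to taking an exceedance quantile. A hedged sketch with a synthetic ensemble:

```python
import numpy as np

# synthetic ensemble of seasonal flow volumes (the real input is a forecast ensemble)
ensemble = np.random.default_rng(6).gamma(shape=4.0, scale=50.0, size=500)
for reliability in (0.80, 0.90, 0.95):
    contract = np.quantile(ensemble, 1.0 - reliability)   # flow exceeded w.p. >= reliability
    print(reliability, round(contract, 1))                # higher reliability, smaller contract
```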
Fixed Point Learning Based Intelligent Traffic Control System
NASA Astrophysics Data System (ADS)
Zongyao, Wang; Cong, Sui; Cheng, Shao
2017-10-01
Fixed point learning has become an important tool for analysing large-scale distributed systems such as urban traffic networks. This paper presents a fixed point learning based intelligent traffic network control system. The system applies the convergence property of the fixed point theorem to optimize traffic flow density. The intelligent traffic control system achieves maximum road resource usage by averaging traffic flow density across the traffic network. The intelligent traffic network control system is built on a decentralized structure and intelligent cooperation. No central control is needed to manage the system. The proposed system is simple, effective and feasible for practical use. The performance of the system is tested via theoretical proof and simulations. The results demonstrate that the system can effectively solve the traffic congestion problem and increase the vehicles' average speed. They also show that the system is flexible, reliable and feasible for practical use.
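A minimal sketch of the averaging idea, assuming an invented four-road network: iterate a fixed-point map that moves each road's density toward the mean of its neighbors, which contracts toward a balanced density on a connected network:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], float)        # road adjacency (invented)
P = A / A.sum(axis=1, keepdims=True)        # neighbor-averaging operator
density = np.array([0.9, 0.2, 0.6, 0.1])    # initial traffic flow density per road

for _ in range(100):
    density = 0.5 * density + 0.5 * P @ density   # damped fixed-point iteration
print(density)   # converges toward a balanced use of road resources
```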
Application of Advanced Process Control techniques to a pusher type reheating furnace
NASA Astrophysics Data System (ADS)
Zanoli, S. M.; Pepe, C.; Barboni, L.
2015-11-01
In this paper an Advanced Process Control system aimed at controlling and optimizing a pusher type reheating furnace located in an Italian steel plant is proposed. The designed controller replaced the previous control system, based on PID controllers manually conducted by process operators. A two-layer Model Predictive Control architecture has been adopted that, exploiting a chemical, physical and economic modelling of the process, overcomes the limitations of plant operators’ mental model and knowledge. In addition, an ad hoc decoupling strategy has been implemented, allowing the selection of the manipulated variables to be used for the control of each single process variable. Finally, in order to improve the system flexibility and resilience, the controller has been equipped with a supervision module. A profitable trade-off between conflicting specifications, e.g. safety, quality and production constraints, energy saving and pollution impact, has been guaranteed. Simulation tests and real plant results demonstrated the soundness and the reliability of the proposed system.
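The lower MPC layer for a single loop can be sketched as a constrained quadratic program, here with cvxpy. The first-order zone model, the input bounds, and the set-point are assumptions for illustration, not the plant model used in the paper:

```python
import cvxpy as cp

a, b, horizon = 0.95, 0.4, 20      # assumed discrete first-order furnace-zone model
x0, setpoint = 600.0, 1200.0       # zone temperature, °C

x = cp.Variable(horizon + 1)       # predicted temperatures
u = cp.Variable(horizon)           # manipulated variable, e.g. fuel flow
constraints = [x[0] == x0]
for k in range(horizon):
    constraints += [x[k + 1] == a * x[k] + b * u[k], u[k] >= 0, u[k] <= 600]

cost = cp.sum_squares(x[1:] - setpoint) + 0.01 * cp.sum_squares(u)
cp.Problem(cp.Minimize(cost), constraints).solve()
print(float(u.value[0]))  # only the first move is applied; the horizon then recedes
```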
Synchronic interval Gaussian mixed-integer programming for air quality management.
Cheng, Guanhui; Huang, Guohe Gordon; Dong, Cong
2015-12-15
To reveal the synchronism of interval uncertainties, the tradeoff between system optimality and security, the discreteness of facility-expansion options, the uncertainty of pollutant dispersion processes, and the seasonality of wind features in air quality management (AQM) systems, a synchronic interval Gaussian mixed-integer programming (SIGMIP) approach is proposed in this study. A robust interval Gaussian dispersion model is developed for approaching the pollutant dispersion process under interval uncertainties and seasonal variations. The reflection of synchronic effects of interval uncertainties in the programming objective is enabled through introducing interval functions. The proposition of constraint violation degrees helps quantify the tradeoff between system optimality and constraint violation under interval uncertainties. The overall optimality of system profits of an SIGMIP model is achieved based on the definition of an integrally optimal solution. Integer variables in the SIGMIP model are resolved by the existing cutting-plane method. Combining these efforts leads to an effective algorithm for the SIGMIP model. An application to an AQM problem in a region in Shandong Province, China, reveals that the proposed SIGMIP model can facilitate identifying the desired scheme for AQM. The enhancement of the robustness of optimization exercises may be helpful for increasing the reliability of suggested schemes for AQM under these complexities. The interrelated tradeoffs among control measures, emission sources, flow processes, receptors, influencing factors, and economic and environmental goals are effectively balanced. Interests of many stakeholders are reasonably coordinated. The harmony between economic development and air quality control is enabled. Results also indicate that the constraint violation degree is effective at reflecting the compromise relationship between constraint-violation risks and system optimality under interval uncertainties. This can help decision makers mitigate potential risks, e.g. insufficiency of pollutant treatment capabilities, exceedance of air quality standards, deficiency of pollution control fund, or imbalance of economic or environmental stress, in the process of guiding AQM. Copyright © 2015 Elsevier B.V. All rights reserved.
Thin-film module circuit design: Practical and reliability aspects
NASA Technical Reports Server (NTRS)
Daiello, R. V.; Twesme, E. N.
1985-01-01
This paper addresses several aspects of the design and construction of submodules based on thin-film amorphous silicon (a-Si) p-i-n solar cells. Starting from presently attainable single-cell characteristics and a realistic set of specifications, practical module designs are discussed from the viewpoints of design efficiency, fabrication requirements, and reliability concerns. The examples center mostly on series-interconnected modules of the superstrate type, with detailed discussions of each portion of the structure in relation to its influence on module efficiency. Emphasis is placed on engineering topics such as area coverage, optimal geometries, and cost and reliability. Practical constraints on achieving optimal designs, along with some examples of potential pitfalls in the manufacture and subsequent performance of a-Si modules, are discussed.
Safe Direct Current Stimulator design for reduced power consumption and increased reliability.
Fridman, Gene
2017-07-01
Current state of the art neural prosthetics, such as cochlear implants, spinal cord stimulators, and deep brain stimulators use implantable pulse generators (IPGs) to excite neural activity. Inhibition of neural firing is typically indirect and requires excitation of neurons that then have inhibitory projections downstream. Safe Direct Current Stimulator (SDCS) technology is designed to convert electronic pulses delivered to electrodes embedded within an implantable device to ionic direct current (iDC) at the output of the device. iDC from the device can then control neural extracellular potential with the intent of being able to not only excite, but also inhibit and sensitize neurons, thereby greatly expanding the possible applications of neuromodulation therapies and neural interface mechanisms. While the potential applications and proof of concept of this device have been the focus of previous work, the published descriptions of this technology leave significant room for power and reliability optimization. We describe and model a novel device construction designed to reduce power consumption by a factor of 12 and to improve its reliability by a factor of 8.
On Applications of Disruption Tolerant Networking to Optical Networking in Space
NASA Technical Reports Server (NTRS)
Hylton, Alan Guy; Raible, Daniel E.; Juergens, Jeffrey; Iannicca, Dennis
2012-01-01
The integration of optical communication links into space networks via Disruption Tolerant Networking (DTN) is a largely unexplored area of research. Building on successful foundational work accomplished at JPL, we discuss a multi-hop, multi-path network featuring optical links. The experimental test bed is constructed at the NASA Glenn Research Center and features multiple Ethernet-to-fiber converters coupled with free space optical (FSO) communication channels. The test bed architecture models communication paths from deployed Mars assets to the Deep Space Network (DSN) and finally to the mission operations center (MOC). Reliable versus unreliable communication methods are investigated and discussed, including reliable transport protocols, custody transfer, and fragmentation. Potential commercial applications may include an optical communications infrastructure deployment to support developing nations and remote areas, which are unburdened by an existing heritage means of telecommunications. Narrow laser beam widths and control of polarization states offer inherent physical layer security benefits of optical communications over RF solutions. This paper explores whether or not DTN is appropriate for space-based optical networks, along with optimal payload sizes, reliability, and security considerations.
Sensor Selection and Optimization for Health Assessment of Aerospace Systems
NASA Technical Reports Server (NTRS)
Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy
2007-01-01
Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, dependable sensor suite to address system health assessment requirements.
Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Tsai, F. T.
2012-12-01
Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer frame, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance constrained (CC) programming for stochastic remediation design. Chance constrained programming has traditionally been implemented to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, the HBMA-CC approach was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, considering only the single best model, variances that stem from uncertainty in the model structure are ignored. Second, considering the best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desirable reliability. However, considering only the single best model, the calculated reliability will differ from the desirable reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels in the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. Also, the prediction variance changed with different extraction rates. Using a very high extraction rate will cause prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA models are used.
Feature reliability determines specificity and transfer of perceptual learning in orientation search
Yashar, Amit; Denison, Rachel N
2017-12-01
Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL's effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments.
Smart EV Energy Management System to Support Grid Services
NASA Astrophysics Data System (ADS)
Wang, Bin
Under smart grid scenarios, advanced sensing and metering technologies have been applied to the legacy power grid to improve system observability and real-time situational awareness. Meanwhile, an increasing amount of distributed energy resources (DERs), such as renewable generation, electric vehicles (EVs) and battery energy storage systems (BESS), is being integrated into the power system. However, the integration of EVs, which can be modeled as controllable mobile energy devices, brings both challenges and opportunities to grid planning and energy management, due to the intermittency of renewable generation, uncertainties in EV driver behaviors, etc. This dissertation aims to solve the real-time EV energy management problem in order to improve overall grid efficiency, reliability and economics, using online and predictive optimization strategies. Most previous research on EV energy management strategies and algorithms is based on simplified models with the unrealistic assumption that EV charging behaviors are perfectly known or follow known distributions, such as the arrival time, departure time and energy consumption values. These approaches fail to obtain optimal solutions in real time because of the system uncertainties. Moreover, there is a lack of data-driven strategies that perform online and predictive scheduling of EV charging behaviors under microgrid scenarios. Therefore, we develop an online predictive EV scheduling framework, considering uncertainties in renewable generation, building load and EV driver behaviors, based on real-world data. A kernel-based estimator is developed to predict the charging session parameters in real time with improved estimation accuracy. The efficacy of various optimization strategies supported by this framework, including valley-filling, cost reduction and event-based control, has been demonstrated. In addition, existing simulation-based approaches do not consider a variety of practical concerns in implementing such a smart EV energy management system, including driver preferences, communication protocols, data models, and the customized integration of existing standards to provide grid services. Therefore, this dissertation also addresses these issues by designing and implementing a scalable system architecture to capture user preferences, enable multi-layer communication and control, and finally improve system reliability and interoperability.
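A kernel-based session estimator in the spirit described could be as simple as Nadaraya-Watson regression over historical sessions. The sketch predicts session energy from arrival hour with a Gaussian kernel; the history is synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
arrivals = rng.uniform(6, 22, 300)                 # arrival hour of past sessions
energy = 8 + 4 * np.sin((arrivals - 8) / 4) + rng.normal(0, 1, 300)   # kWh delivered

def predict_energy(hour, h=1.0):
    w = np.exp(-0.5 * ((arrivals - hour) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * energy) / np.sum(w)             # Nadaraya-Watson estimate

print(round(predict_energy(9.0), 2), round(predict_energy(18.0), 2))
```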
The control system of the polarized internal target of ANKE at COSY
NASA Astrophysics Data System (ADS)
Kleines, H.; Sarkadi, J.; Zwoll, K.; Engels, R.; Grigoryev, K.; Mikirtychyants, M.; Nekipelov, M.; Rathmann, F.; Seyfarth, H.; Kravtsov, P.; Vasilyev, A.
2006-05-01
The polarized internal target for the ANKE experiment at the Cooler Synchrotron COSY of the Forschungszentrum Jülich utilizes a polarized atomic beam source to feed a storage cell with polarized hydrogen or deuterium atoms. The nuclear polarization is measured with a Lamb-shift polarimeter. For common control of the two systems, industrial equipment was selected that provides reliable, long-term support and remote control of the target as well as measurement and optimization of its operating parameters. The interlock system has been implemented on the basis of the SIEMENS SIMATIC S7-300 family of programmable logic controllers. In order to unify the interfacing to the control computer, all front-end equipment is connected via the PROFIBUS DP fieldbus. The process control software was implemented using the Windows-based WinCC toolkit from SIEMENS. The variety of components to be controlled and the logical structure of the control and interlock systems are described. Finally, a number of applications of the present development to other new installations are briefly mentioned.
Design of a modular digital computer system
NASA Technical Reports Server (NTRS)
1973-01-01
A design tradeoff study is reported for a modular spaceborne computer system that is responsive to many mission types and phases. The computer uses redundancy to maximize reliability, and multiprocessing to maximize processing capacity. Fault detection and recovery features provide optimal reliability.
De Filippis, Luigi Alberto Ciro; Serio, Livia Maria; Facchini, Francesco; Mummolo, Giovanni; Ludovico, Antonio Domenico
2016-11-10
A simulation model was developed for the monitoring, control, and optimization of the Friction Stir Welding (FSW) process. This approach allows identifying the correlation between the FSW process parameters (input variables) and the mechanical properties (output responses) of the welded AA5754 H111 aluminum plates. The optimization of technological parameters is a basic requirement for increasing seam quality, since it promotes a stable and defect-free process. The tool rotation and travel speed, the position of the samples extracted from the weld bead, and the thermal data, detected with thermographic techniques for on-line control of the joints, were varied to build the experimental plans. The quality of the joints was evaluated through destructive and non-destructive tests (visual tests, macrographic analysis, tensile tests, Vickers indentation hardness tests, and thermographic controls). The simulation model was based on Artificial Neural Networks (ANNs) with a back-propagation learning algorithm and different types of architecture, which were able to predict the FSW process parameters with good reliability for the welding of AA5754 H111 aluminum plates in butt-joint configuration.
Low-Thrust Transfers from Distant Retrograde Orbits to L2 Halo Orbits in the Earth-Moon System
NASA Technical Reports Server (NTRS)
Parrish, Nathan L.; Parker, Jeffrey S.; Hughes, Steven P.; Heiligers, Jeannette
2016-01-01
This paper presents a study of transfers between distant retrograde orbits (DROs) and L2 halo orbits in the Earth-Moon system that could be flown by a spacecraft with solar electric propulsion (SEP). Two collocation-based optimal control methods are used to optimize these highly-nonlinear transfers: Legendre pseudospectral and Hermite-Simpson. Transfers between DROs and halo orbits using low-thrust propulsion have not been studied previously. This paper offers a study of several families of trajectories, parameterized by the number of orbital revolutions in a synodic frame. Even with a poor initial guess, a method is described to reliably generate families of solutions. The circular restricted 3-body problem (CRTBP) is used throughout the paper so that the results are autonomous and simpler to understand.
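For reference, the Hermite-Simpson transcription mentioned above enforces the dynamics $\dot{x} = f(x,u)$ on each segment of width $h$ through defect constraints of the standard form (a textbook restatement, not taken from the paper):

$$x_c = \frac{x_k + x_{k+1}}{2} + \frac{h}{8}\left(f_k - f_{k+1}\right), \qquad x_{k+1} - x_k - \frac{h}{6}\left(f_k + 4 f_c + f_{k+1}\right) = 0,$$

with $f_k = f(x_k, u_k)$ and $f_c = f(x_c, u_c)$; the optimizer then searches over the node states and controls subject to these defect constraints, which is what makes the method tolerant of poor initial guesses.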
Need for Cost Optimization of Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Anderson, Grant
2017-01-01
As the nation plans manned missions that go far beyond Earth orbit to Mars, there is an urgent need for a robust, disciplined systems engineering methodology that can identify an optimized Environmental Control and Life Support (ECLSS) architecture for long duration deep space missions. But unlike the previously used Equivalent System Mass (ESM), the method must be inclusive of all driving parameters and emphasize the economic analysis of life support system design. The key parameter for this analysis is Life Cycle Cost (LCC). LCC takes into account the cost for development and qualification of the system, launch costs, operational costs, maintenance costs and all other relevant and associated costs. Additionally, an effective methodology must consider system technical performance, safety, reliability, maintainability, crew time, and other factors that could affect the overall merit of the life support system.
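Schematically, the LCC criterion advocated here can be written as a sum over cost elements (a generic decomposition for illustration, not the paper's exact formula):

$$\mathrm{LCC} = C_{\text{dev+qual}} + C_{\text{launch}} + C_{\text{ops}} + C_{\text{maint}} + C_{\text{other}},$$

where the launch term typically scales with delivered mass, so that mass-based proxies such as ESM capture only one contribution to the overall figure of merit rather than the whole of it.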
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.
2011-01-01
A stochastic design optimization methodology (SDO) has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, such as thermomechanical loads, material properties, and failure theories, as well as design variables like the depth of a beam or the thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
The operations of quantum logic gates with pure and mixed initial states.
Chen, Jun-Liang; Li, Che-Ming; Hwang, Chi-Chuan; Ho, Yi-Hui
2011-04-07
The implementations of quantum logic gates realized by the rovibrational states of a C(12)O(16) molecule in the X((1)Σ(+)) electronic ground state are investigated. Optimal laser fields are obtained by using modified multi-target optimal control theory (MTOCT), which combines the maxima of the cost functional and the fidelity for the state and the quantum process. The projection operator technique together with modified MTOCT is used to obtain the optimal laser fields. If the initial states of the quantum gate are pure, the states at the target time closely approach the ideal target states; however, if the initial states are mixed, they do not. The process fidelity is introduced to investigate the reliability of the quantum gate operation driven by the optimal laser field. We find that the quantum gates operate reliably whether the initial states are pure or mixed.
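For context, a common definition of the process fidelity invoked here, for a target unitary $U$ and an achieved evolution $\tilde{U}$ on an $N$-dimensional space, is (a standard form; the paper's exact definition may differ):

$$F_{\text{proc}} = \frac{\left|\mathrm{Tr}\!\left(U^{\dagger}\tilde{U}\right)\right|^{2}}{N^{2}},$$

which equals 1 only when the gate reproduces the target operation up to a global phase, and, unlike a state fidelity, does not depend on any particular input state.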
NASA Astrophysics Data System (ADS)
Li, Lun; Wei, Sixiao; Tian, Xin; Hsieh, Li-Tse; Chen, Zhijiang; Pham, Khanh; Lyke, James; Chen, Genshe
2018-05-01
In the current global positioning system (GPS), the reliability of information transmissions can be enhanced with the aid of inter-satellite links (ISLs), or crosslinks, between satellites. Instead of only using conventional radio frequency (RF) crosslinks, laser crosslinks provide an option to significantly increase the data throughput. The connectivity and robustness of ISLs need to be analyzed, especially for GPS constellations with laser crosslinks. In this paper, we first propose a hybrid GPS communication architecture in which uplinks and downlinks are established via RF signals and crosslinks are established via laser links. Then, we design optical crosslink assignment criteria considering practical optical communication factors such as optical line-of-sight (LOS) range, link distance, and angular velocity. After that, to further improve the rationality of establishing crosslinks, a topology control algorithm is formulated to optimize GPS crosslink networks at both the physical and network layers. The RF transmission features for the uplink and downlink and the optical transmission features for the crosslinks are taken into account as constraints in the optimization problem. Finally, the proposed link establishment criteria are implemented for GPS communication with optical crosslinks. The designs of this paper provide a potential crosslink establishment and topology control algorithm for the next generation of GPS.
Plasma-assisted physical vapor deposition surface treatments for tribological control
NASA Technical Reports Server (NTRS)
Spalvins, Talivaldis
1990-01-01
In any mechanical or engineering system where contacting surfaces are in relative motion, adhesion, wear, and friction affect reliability and performance. With the advancement of space-age transportation systems, the tribological requirements have dramatically increased, owing to the optimized designs, precision tolerance requirements, and high reliability expected of solid lubricating films that must withstand hostile operating conditions (vacuum, temperature extremes, high loads, and space radiation). For these problem areas, the ion-assisted deposition/modification processes (plasma-based and ion beam techniques) offer the greatest potential for the synthesis of thin films and the tailoring of adherence and of chemical and structural properties for optimized tribological performance. The present practices and new approaches for applying soft solid-lubricant and hard wear-resistant films to engineering substrates are reviewed. Ion bombardment treatments have increased film adherence, lowered friction coefficients, and enhanced the wear life of solid lubricating films such as the dichalcogenides (MoS2) and the soft metals (Au, Ag, Pb). Currently, sputtering is the preferred method of applying MoS2 films, and ion plating of applying the soft metallic films. Ultralow friction coefficients (less than 0.01) were achieved with sputtered MoS2. Further, new diamond-like carbon and BN lubricating films are being developed using ion-assisted deposition techniques.
Estes, Matthew D; Yang, Jianing; Duane, Brett; Smith, Stan; Brooks, Carla; Nordquist, Alan; Zenhausern, Frederic
2012-12-07
This study reports the design, prototyping, and assay development of multiplexed polymerase chain reaction (PCR) on a plastic microfluidic device. Amplification of 17 DNA loci is carried out directly on-chip as part of a system for continuous workflow processing from sample preparation (SP) to capillary electrophoresis (CE). For enhanced performance of on-chip PCR amplification, improved control systems have been developed, making use of customized Peltier assemblies, valve actuators, software, and amplification chemistry protocols. Multiple enhancements to the microfluidic chip design have been enacted to improve the reliability of sample delivery through the various on-chip modules. This work has been enabled by the encapsulation of PCR reagents into a solid-phase material through an optimized Solid Phase Encapsulating Assay Mix (SPEAM) bead-based hydrogel fabrication process. SPEAM bead technology is reliably coupled with precise microfluidic metering and dispensing for efficient amplification and subsequent DNA short tandem repeat (STR) fragment analysis. This provides a means of on-chip reagent storage suitable for microfluidic automation, with the long shelf-life necessary for point-of-care (POC) or field-deployable applications. This paper reports the first high-quality 17-plex forensic STR amplification from a reference sample in a microfluidic chip with preloaded solid-phase reagents, designed for integration with upstream and downstream processing.
Impens, Saartje; Chen, Yantian; Mullens, Steven; Luyten, Frank; Schrooten, Jan
2010-12-01
The repair of large and complex bone defects could be helped by a cell-based bone tissue engineering strategy. A reliable and consistent cell-seeding methodology is a mandatory step in bringing bone tissue engineering into the clinic. However, optimization of the cell-seeding step is only relevant when it can be reliably evaluated. The cell seeding efficiency (CSE) plays a fundamental role herein. Results showed that cell lysis and the definition used to determine the CSE played a key role in quantifying the CSE. The definition of CSE should therefore be consistent and unambiguous. The study of the influence of five drop-seeding-related parameters within the studied test conditions showed that (i) the cell density and (ii) the seeding vessel did not significantly affect the CSE, whereas (iii) the volume of seeding medium-to-free scaffold volume ratio (MFR), (iv) the seeding time, and (v) the scaffold morphology did. Prolonging the incubation time increased the CSE up to a plateau value at 4 h. Increasing the MFR or permeability by changing the morphology of the scaffolds significantly reduced the CSE. These results confirm that cell seeding optimization is needed and that an evidence-based selection of the seeding conditions is favored.
Stability, Nonlinearity and Reliability of Electrostatically Actuated MEMS Devices
Zhang, Wen-Ming; Meng, Guang; Chen, Di
2007-01-01
Electrostatic micro-electro-mechanical systems (MEMS) form a special branch of MEMS with a wide range of applications in sensing and actuating devices. This paper provides a detailed survey and analysis of the electrostatic forces of importance in MEMS: their physical model, scaling effects, stability, nonlinearity, and reliability. Understanding the effects of electrostatic forces in MEMS allows many phenomena of practical importance to be explained scientifically, such as pull-in instability; the effects of effective stiffness, dielectric charging, stress gradient, and temperature on the pull-in voltage; nonlinear dynamic effects; and reliability issues due to electrostatic forces; consequently, the great potential of MEMS technology can be explored effectively and utilized optimally. A simplified parallel-plate capacitor model is proposed to investigate the resonance response, inherent nonlinearity, stiffness-softening effect, and coupled nonlinear effects of typical electrostatically actuated MEMS devices. Failure modes and mechanisms are discussed, together with methods and techniques for analyzing and reducing failures in electrostatically actuated MEMS devices, including materials selection, sound design, and extension of the controllable travel range. Numerical simulations and discussions indicate that the effects of instability, nonlinear characteristics, and reliability subjected to electrostatic forces cannot be ignored and are in need of further investigation.
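The simplified parallel-plate model referred to above has a classic closed-form pull-in estimate; the sketch below computes it with illustrative numbers (the spring constant, gap, and plate area are assumptions, not values from the paper).

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Classic parallel-plate estimate: the electrostatic instability
    occurs after 1/3 of the gap is traversed, giving
    V_pi = sqrt(8*k*g^3 / (27*eps0*A))."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

# Illustrative numbers: k = 10 N/m, 2 um gap, 100 um x 100 um plate
print(f"{pull_in_voltage(10.0, 2e-6, 1e-8):.2f} V")  # ~16 V
```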
Randomized controlled trials in dentistry: common pitfalls and how to avoid them.
Fleming, Padhraig S; Lynch, Christopher D; Pandis, Nikolaos
2014-08-01
Clinical trials are used to appraise the effectiveness of clinical interventions throughout medicine and dentistry. Randomized controlled trials (RCTs) are established as the optimal primary design and are published with increasing frequency within the biomedical sciences, including dentistry. This review outlines common pitfalls associated with the conduct of randomized controlled trials in dentistry. Common failings in RCT design leading to various types of bias, including selection, performance, detection, and attrition bias, are discussed. Moreover, methods of minimizing and eliminating bias are presented to ensure that maximal benefit is derived from RCTs within dentistry. Well-designed RCTs have both upstream and downstream uses, acting as a template for development and populating systematic reviews to permit more precise estimates of treatment efficacy and effectiveness. However, there is increasing awareness of waste in clinical research, whereby resource-intensive studies fail to provide a commensurate level of scientific evidence. Waste may stem either from inappropriate design or from inadequate reporting of RCTs; the importance of robust conduct of RCTs within dentistry is clear. Optimal reporting of randomized controlled trials within dentistry is necessary to ensure that trials are reliable and valid. Common shortcomings leading to important forms of bias are discussed and approaches to minimizing these issues are outlined. Copyright © 2014 Elsevier Ltd. All rights reserved.
Cost-Effective and Ecofriendly Plug-In Hybrid Electric Vehicle Charging Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontou, Eleftheria; Yin, Yafeng; Ge, Ying-en
2017-01-01
In this study we explore two charging management schemes for plug-in hybrid electric vehicles (PHEVs). The PHEV drivers and the government were stakeholders who might have preferred different charging control strategies. For the former, a proposed controlled charging scheme minimized the operational cost during PHEV charge-depleting and sustaining modes. For the latter, the research minimized monetized carbon dioxide emissions from electricity generation for PHEV charging, as well as tailpipe emissions for the portion of PHEV trips fueled by gasoline. Hourly driving patterns and electricity data were leveraged, representative of each of the eight North American Electric Reliability Corporation regions, to examine the results of the proposed schemes. The model accounted for drivers' activity patterns and the spatial and temporal heterogeneity of charging availability. The optimal charging profiles confirmed the differing nature of the objectives of PHEV drivers and the government; cost-effective charging should occur early in the morning, while ecofriendly charging should be late in the afternoon. Each control's trade-offs between operation cost and emission savings are discussed for each North American Electric Reliability Corporation region. The availability of workplace and public charging was found to affect the optimal charging profiles greatly. Charging control is more efficient for drivers and government when PHEVs have greater electric range.
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during the integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli, conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis, were presented sequentially. In the primary task, a space bisection task, participants had to evaluate the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the attentional condition with respect to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
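For reference, the MLE model used here combines the unisensory estimates with reliability-proportional weights (the standard cue-combination equations):

$$\hat{S}_{AT} = w_A\hat{S}_A + w_T\hat{S}_T,\qquad w_A = \frac{1/\sigma_A^{2}}{1/\sigma_A^{2}+1/\sigma_T^{2}},\qquad \sigma_{AT}^{2} = \frac{\sigma_A^{2}\,\sigma_T^{2}}{\sigma_A^{2}+\sigma_T^{2}},$$

so an attentional reduction of the auditory noise $\sigma_A$ directly increases the auditory weight $w_A$, which is the effect reported in this study.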
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano
Past works that focused on addressing power-quality and reliability concerns related to renewable energy resources (RESs) operating under business-as-usual practices have looked at the design of Volt/VAr and Volt/Watt strategies that regulate real or reactive power based on local voltage measurements, so that terminal voltages stay within acceptable levels. These control strategies have the potential of operating at the same time scale as distribution-system dynamics, and can therefore mitigate disturbances precipitated by fast time-varying loads and ambient conditions; however, they do not necessarily guarantee system-level optimality, and stability claims are mainly based on empirical evidence. On a different time scale, centralized and distributed optimal power flow (OPF) algorithms have been proposed to compute optimal steady-state inverter setpoints, so that power losses and voltage deviations are minimized and economic benefits to end-users providing ancillary services are maximized. However, traditional OPF schemes may offer decision-making capabilities that do not match the dynamics of distribution systems. In particular, during the time required to collect data from all the nodes of the network (e.g., loads), solve the OPF, and subsequently dispatch setpoints, the underlying load, ambient, and network conditions may have already changed; in that case, the DER output powers would be consistently regulated around outdated setpoints, leading to suboptimal system operation and violation of relevant electrical limits. The present work focuses on the synthesis of distributed RES-inverter controllers that leverage the opportunities for fast feedback offered by power-electronics-interfaced RESs. The overarching objective is to bridge the temporal gap between long-term system optimization and real-time control, to enable seamless large-scale RES integration with stability and efficiency guarantees, while congruently pursuing system-level optimization objectives. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. The proposed controllers enable an update of the power outputs at a time scale that is compatible with the underlying dynamics of loads and ambient conditions, and continuously drive the system operation towards OPF-based solutions.
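A minimal sketch of the local Volt/VAr strategy described at the start of this abstract is given below; the breakpoints and the deadband shape (in the style of common interconnection standards) are illustrative assumptions rather than values from the report.

```python
import numpy as np

def volt_var_setpoint(v_pu, q_max,
                      v_lo=0.95, v_dead_lo=0.98,
                      v_dead_hi=1.02, v_hi=1.05):
    """Piecewise-linear Volt/VAr droop: inject reactive power when the
    local voltage is low, absorb when it is high, zero in the deadband."""
    v = np.asarray(v_pu, dtype=float)
    q = np.zeros_like(v)
    q[v <= v_lo] = q_max
    lo = (v > v_lo) & (v < v_dead_lo)
    q[lo] = q_max * (v_dead_lo - v[lo]) / (v_dead_lo - v_lo)
    hi = (v > v_dead_hi) & (v < v_hi)
    q[hi] = -q_max * (v[hi] - v_dead_hi) / (v_hi - v_dead_hi)
    q[v >= v_hi] = -q_max
    return q

# kvar setpoints for a sweep of local voltages (per unit)
print(volt_var_setpoint([0.94, 0.97, 1.00, 1.03, 1.06], q_max=100.0))
```

Such a rule acts instantly on local measurements, which is exactly why it matches distribution-system dynamics but cannot by itself deliver the system-level optimality that the OPF-based layer provides.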
Gamma guidance of trajectories for coplanar, aeroassisted orbital transfer
NASA Technical Reports Server (NTRS)
Miele, A.; Wang, T.
1990-01-01
The optimization and guidance of trajectories for coplanar, aeroassisted orbital transfer (AOT) from high Earth orbit (HEO) to low Earth orbit (LEO) are examined. In particular, the HEO can be a geosynchronous Earth orbit (GEO). It is assumed that the initial and final orbits are circular, that the gravitational field is central and governed by the inverse-square law, and that at most three impulses are employed: one at HEO exit, one at atmospheric exit, and one at LEO entry. It is also assumed that, during the atmospheric pass, the trajectory is controlled via the lift coefficient. The presence of upper and lower bounds on the lift coefficient is considered. First, optimal trajectories are computed by minimizing the total velocity impulse (hence, the propellant consumption) required for the AOT transfer. The sequential gradient-restoration algorithm (SGRA) for optimal control problems is used. The optimal trajectory is shown to include two branches: a relatively short descending flight branch (branch 1) and a long ascending flight branch (branch 2). Next, attention is focused on guidance trajectories capable of approximating the optimal trajectories in real time, while retaining the essential characteristics of simplicity, ease of implementation, and reliability. For the atmospheric pass, a feedback control scheme is employed and the lift coefficient is adjusted according to a two-stage gamma guidance law. Further improvements are possible via a modified gamma guidance, which is more stable than the original gamma guidance with respect to dispersion effects arising from navigation errors, variations of the atmospheric density, and uncertainties in the aerodynamic coefficients. A byproduct of the studies on dispersion effects is the following design concept: for coplanar aeroassisted orbital transfer, the lift-range-to-weight ratio appears to play a more important role than the lift-to-drag ratio. This is because the lift-range-to-weight ratio mainly controls the minimum altitude (hence, the peak heating rate) of the guidance trajectory, whereas the lift-to-drag ratio mainly controls the duration of the atmospheric pass.
Bladder welding in rats using controlled temperature CO2 laser system.
Lobik, L; Ravid, A; Nissenkorn, I; Kariv, N; Bernheim, J; Katzir, A
1999-05-01
Laser tissue welding has potential advantages over conventional suture closure of surgical wounds. It is a noncontact technique that introduces no foreign body and limits the possibility of infections and complications. The closure can be immediately watertight, and the procedure may be less traumatic, faster, and easier. In spite of these advantages, laser welding has not yet been approved for wide use. The problem in the clinical implementation of this technique arises from the difficulty in defining the conditions under which a highly reliable weld is formed. We assumed that successful welding of tissues depends on the ability to monitor and control the surface temperature during the procedure, thereby avoiding underheating or overheating. The purpose of this work was to develop a laser system for reliable welding of urinary tract tissues under good temperature control. We developed a "smart" laser system capable of a dual role: transmitting CO2 laser power for tissue heating, and noncontact (radiometric) temperature monitoring and control. Bladder opening (cystotomy) was performed in 38 rats. Thirty-three animals underwent laser welding. In 5 rats (control group) the bladder wound was closed with one layer of continuous 6-0 Dexon sutures. Reliable welding was obtained when the surface temperature was kept at 71 ± 5 °C. Weld quality was assessed immediately after the operation. The rats were sacrificed on days 2, 10, and 30 for histological study. Bladder closure using the laser welding system was successful in 31/33 (94%) of the animals. Histological examination revealed excellent welding and healing of the tissue. The efficiency of laser welding of the urinary bladder in rats was confirmed by the high survival rate and by the quality of the scar demonstrated in clinical and histological examinations. In the future, optimal laser welding conditions will be studied in larger animals, using CO2 and other lasers with deeper radiation penetration into tissues.
Ochoa, Silvia; Yoo, Ahrim; Repke, Jens-Uwe; Wozny, Günter; Yang, Dae Ryook
2007-01-01
Despite the many environmental advantages of using alcohol as a fuel, there are still serious questions about its economic feasibility compared with oil-based fuels. The bioethanol industry needs to be more competitive, and therefore all stages of its production process must be simple, inexpensive, efficient, and "easy" to control. In recent years, there have been significant improvements in process design, such as in the purification technologies for ethanol dehydration (molecular sieves, pressure swing adsorption, pervaporation, etc.) and in genetic modifications of microbial strains. However, considerable research effort is still required in optimization and control, where the first step is the development of suitable models of the process, which can be used as a simulated plant, as a soft sensor, or as part of the control algorithm. Thus, toward developing good, reliable, and simple but highly predictive models that can be used in the future for optimization and process control applications, in this paper an unstructured and a cybernetic model are proposed and compared for the simultaneous saccharification-fermentation (SSF) process for the production of ethanol from starch by a recombinant Saccharomyces cerevisiae strain. The proposed cybernetic model is a new one that considers the degradation of starch not only into glucose but also into dextrins (reducing sugars) and takes into account the intracellular reactions occurring inside the cells, giving a more detailed description of the process. Furthermore, an identification procedure based on the Metropolis Monte Carlo optimization method coupled with a sensitivity analysis is proposed for the identification of the model's parameters, employing experimental data reported in the literature.
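As a sketch of the Metropolis Monte Carlo identification step described above, the following fits model parameters to data by random perturbation with Boltzmann acceptance. The toy model, step size, and temperature are assumptions for illustration, not the paper's SSF kinetics or code.

```python
import numpy as np

def metropolis_fit(loss, theta0, n_iter=5000, step=0.05, temp=1.0, seed=0):
    """Metropolis Monte Carlo parameter search: perturb the parameter
    vector, always accept improvements, and accept worsenings with
    probability exp(-dLoss/temp)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    f = loss(theta)
    best, best_f = theta.copy(), f
    for _ in range(n_iter):
        cand = theta + rng.normal(0.0, step, size=theta.shape)
        fc = loss(cand)
        if fc < f or rng.random() < np.exp(-(fc - f) / temp):
            theta, f = cand, fc
            if f < best_f:
                best, best_f = theta.copy(), f
    return best, best_f

# Toy usage: recover (a, b) of y = a*(1 - exp(-b*t)) from noisy data
t = np.linspace(0.0, 10.0, 50)
y = 2.0 * (1 - np.exp(-0.5 * t)) + np.random.default_rng(1).normal(0, 0.02, 50)
sse = lambda p: float(np.sum((y - p[0] * (1 - np.exp(-p[1] * t))) ** 2))
print(metropolis_fit(sse, [1.0, 1.0]))
```

In the paper's procedure, the sensitivity analysis would additionally rank parameters so that only the influential ones are searched in this way.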
Versatile control of Plasmodium falciparum gene expression with an inducible protein-RNA interaction
Goldfless, Stephen J.; Wagner, Jeffrey C.; Niles, Jacquin C.
2014-01-01
The available tools for conditional gene expression in Plasmodium falciparum are limited. Here, to enable reliable control of target gene expression, we build a system to efficiently modulate translation. We overcame several problems associated with other approaches for regulating gene expression in P. falciparum. Specifically, our system functions predictably across several native and engineered promoter contexts, and affords control over reporter and native parasite proteins irrespective of their subcellular compartmentalization. Induction and repression of gene expression are rapid, homogeneous, and stable over prolonged periods. To demonstrate practical application of our system, we used it to reveal direct links between antimalarial drugs and their native parasite molecular target. This is an important outcome given the rapid spread of resistance, and intensified efforts to efficiently discover and optimize new antimalarial drugs. Overall, the studies presented highlight the utility of our system for broadly controlling gene expression and performing functional genetics in P. falciparum. PMID:25370483
Theory, simulation and experiments for precise deflection control of radiotherapy electron beams.
Figueroa, R; Leiva, J; Moncada, R; Rojas, L; Santibáñez, M; Valente, M; Velásquez, J; Young, H; Zelada, G; Yáñez, R; Guillen, Y
2018-03-08
Conventional radiotherapy is mainly delivered by linear accelerators. Although linear accelerators provide dual (electron/photon) radiation beam modalities, both are intrinsically produced by a megavoltage electron current. Modern radiotherapy treatment techniques are based on suitable devices inserted into or attached to conventional linear accelerators. Thus, precise control of the delivered beam becomes a key issue. This work presents an integral description of electron beam deflection control as required for a novel radiotherapy technique based on convergent photon beam production. Theoretical and Monte Carlo approaches were initially used for designing and optimizing the device's components. Then, dedicated instrumentation was developed for experimental verification of the electron beam deflection due to the designed magnets. Both Monte Carlo simulations and experimental results support the reliability of the electrodynamic models used to predict megavoltage electron beam control. Copyright © 2018 Elsevier Ltd. All rights reserved.
OPTIMIZING EXPOSURE MEASUREMENT TECHNIQUES
The research reported in this task description addresses one of a series of interrelated NERL tasks with the common goal of optimizing the predictive power of low cost, reliable exposure measurements for the planned Interagency National Children's Study (NCS). Specifically, we w...
Wind farm optimization using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Ituarte-Villarreal, Carlos M.
In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing wind-farm design and siting and in determining whether a project is economically feasible. In the present work, three different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to optimizing the locations of the components in a wind farm to maximize the energy output for a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced, with a penalization for lack of system reliability. The viral system algorithm utilized in this research solves three well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost, and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, introducing the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a variable number of system components and wind turbines with different operating characteristics and sizes, yielding a more heterogeneous model that can deal with changes in the layout and in the power generation requirements over time. Moreover, the approach evaluates the wake effect of the wind turbines upon one another to describe and evaluate the reduction in the system's power production capacity as a function of the layout distribution of the wind turbines.
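The first method's objective can be stated compactly (our formalization of the description above; the penalty weight $\lambda$ and reliability target $R^{*}$ are assumed notation):

$$\min_{\mathbf{x}}\; \frac{C(\mathbf{x})}{E(\mathbf{x})} + \lambda\,\max\{0,\; R^{*} - R(\mathbf{x})\},$$

where $\mathbf{x}$ encodes the turbine positions, $C$ the farm cost, $E$ the expected energy output under the stated wind environment (including wake losses), and $R$ the system reliability; the penalty term is what steers the viral system algorithm away from cheap but unreliable layouts.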
Baxter, Mikayla F. A.; Merino-Guzman, Ruben; Latorre, Juan D.; Mahaffey, Brittany D.; Yang, Yichao; Teague, Kyle D.; Graham, Lucas E.; Wolfenden, Amanda D.; Hernandez-Velasco, Xochitl; Bielke, Lisa R.; Hargis, Billy M.; Tellez, Guillermo
2017-01-01
Fluorescein isothiocyanate dextran (FITC-d) is a 3–5 kDa marker used to measure tight junction permeability. We have previously shown that intestinal barrier function can be adversely affected by stress, poorly digested diets, or feed restriction (FR), resulting in increased intestinal inflammation-associated permeability. However, further adjustments to the current FITC-d methodology could enhance the precision and efficacy of future results. The objective of the present study was to optimize our current model to obtain a larger difference between control and treated groups, refining the FITC-d measurement as a biomarker in a 24-h FR model used to induce gut permeability in broiler chickens. One in vitro and four in vivo independent experiments were conducted. The results of the present study suggest that by increasing the dose of FITC-d (8.32 versus 4.16 mg/kg); shortening the collection time of blood samples (1 versus 2.5 h); using a pool of non-FITC-d serum as a blank, compared to the previously used PBS; adding a standard curve to set a limit of detection; and modifying the software's optimal sensitivity value, it was possible to obtain more consistent and reliable results. PMID:28470003
Uncertainty-Based Multi-Objective Optimization of Groundwater Remediation Design
NASA Astrophysics Data System (ADS)
Singh, A.; Minsker, B.
2003-12-01
Management of groundwater contamination is a cost-intensive undertaking filled with conflicting objectives and substantial uncertainty. A critical source of this uncertainty in groundwater remediation design problems comes from the hydraulic conductivity values of the aquifer, upon which the predictions of flow and transport of contaminants depend. For a remediation solution to be reliable in practice, it is important that it be robust to potential error in the model predictions. This work focuses on incorporating such uncertainty within a multi-objective optimization framework, to obtain reliable as well as Pareto-optimal solutions. Previous research has shown that small amounts of sampling within a single-objective genetic algorithm can produce highly reliable solutions. However, with multiple objectives, the noise can interfere with the basic operations of a multi-objective solver, such as determining non-domination of individuals, diversity preservation, and elitism. This work proposes several approaches to improve the performance of noisy multi-objective solvers. These include a simple averaging approach, taking samples across the population (which we call extended averaging), and a stochastic optimization approach. All the approaches are tested on standard multi-objective benchmark problems and a hypothetical groundwater remediation case study; the best-performing approach is then tested on a field-scale case at Umatilla Army Depot.
Multicriteria methodological approach to manage urban air pollution
NASA Astrophysics Data System (ADS)
Vlachokostas, Ch.; Achillas, Ch.; Moussiopoulos, N.; Banias, G.
2011-08-01
Managing urban air pollution necessitates a feasible and efficient abatement strategy, characterised as a defined set of specific control measures. In practice, hard budget constraints are present in any decision-making process, and therefore the available alternatives need to be prioritised in a fast but still reliable manner. Moreover, realistic strategies require adequate information on the available control measures, taking into account the area's special characteristics. The selection of the most applicable bundle of measures rests on achieving stakeholders' consensus, while taking into consideration mutually conflicting views and criteria. A preliminary qualitative comparison of alternative control measures would be most handy for decision-makers, forming the grounds for an in-depth analysis of the most promising ones. This paper presents an easy-to-follow multicriteria methodological approach to include and synthesise multi-disciplinary knowledge from various stakeholders, so as to produce a priority list of abatement options, achieve consensus, and secure the adoption of the resulting optimal solution. The approach relies on the active involvement of public authorities and local stakeholders in order to incorporate their environmental, economic, and social preferences. The methodological scheme is implemented for the case of Thessaloniki, Greece, an area considered among the most polluted cities within Europe, especially with respect to airborne particles. Intense police control, natural gas penetration in buildings, and metro construction emerge as the most "promising" alternatives for controlling air pollution in the greater Thessaloniki area (GTA). These three optimal alternatives belong to different thematic areas, namely road transport, thermal heating, and infrastructure; thus, it is clear that efforts should be spread throughout all thematic areas. Natural gas penetration in industrial units, intense monitoring of environmental standards, and regular maintenance of heavy-oil burners are ranked as the 4th, 5th, and 6th optimal alternatives, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
March-Leuba, J.A.
Nuclear plants of the 21st century will employ higher levels of automation and fault tolerance to increase availability, reduce accident risk, and lower operating costs. Key developments in control algorithms, fault diagnostics, fault tolerance, and communication in distributed systems are needed to implement the fully automated plant. Equally challenging will be integrating developments in separate information and control fields into a cohesive system, which collectively achieves the overall goals of improved performance, safety, reliability, maintainability, and cost-effectiveness. Under the Nuclear Energy Research Initiative (NERI), the U.S. Department of Energy is sponsoring a project to address some of the technical issues involved in meeting the long-range goal of 21st-century reactor control systems. This project, "A New Paradigm for Automated Development of Highly Reliable Control Architectures for Future Nuclear Plants," involves researchers from Oak Ridge National Laboratory, the University of Tennessee, and North Carolina State University. This paper documents a research effort to develop methods for automated generation of control systems that can be traced directly to the design requirements. Our final goal is to allow the designer to specify only high-level requirements and the stress factors that the control system must survive (e.g., a list of transients, or a requirement to withstand a single failure). To this end, the "control engine" automatically selects and validates control algorithms and parameters that are optimized to the current state of the plant and have been tested under the prescribed stress factors. The control engine then automatically generates the control software from the validated algorithms. Examples of stress factors that the control system must "survive" are transient events (e.g., set-point changes, or expected occurrences such as a load rejection) and postulated component failures. These stress factors are specified by the designer and become a database of prescribed transients and component failures. The candidate control systems are tested, and their parameters optimized, for each of these stresses. Examples of high-level requirements are a response time of less than xx seconds, or an overshoot of less than xx%. In mathematical terms, these requirements are defined as "constraints," and there are standard mathematical methods for minimizing an objective function subject to constraints. Since, in principle, any control design that satisfies all the above constraints is acceptable, the designer must also select an objective function that describes the "goodness" of the control design. Examples of objective functions are minimizing the number or amount of control motions, or minimizing an energy balance.
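A minimal sketch of that constrained-optimization step, using a generic solver; the plant metric, gain vector, and 2-second requirement are invented stand-ins for the paper's stress factors and high-level requirements, not its actual models.

```python
import numpy as np
from scipy.optimize import minimize

def control_effort(g):
    """Objective ("goodness"): prefer small control gains/motions."""
    return float(np.sum(np.asarray(g) ** 2))

def response_time(g):
    """Stand-in closed-loop metric: larger gains give faster response."""
    return 5.0 / (1.0 + g[0]) + 1.0 / (1.0 + g[1])

constraints = [
    # high-level requirement: response time no greater than 2 seconds
    {"type": "ineq", "fun": lambda g: 2.0 - response_time(g)},
]
bounds = [(0.0, None), (0.0, None)]  # gains must be non-negative

res = minimize(control_effort, x0=[1.0, 1.0],
               bounds=bounds, constraints=constraints)
print(res.x, response_time(res.x))
```

In the paper's scheme, each candidate would additionally be re-evaluated against the whole database of prescribed transients and component failures before its software is generated.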
Radio frequency discharge with control of plasma potential distribution.
Dudnikov, Vadim; Dudnikov, A
2012-02-01
An RF discharge plasma generator with additional electrodes for independent control of the plasma potential distribution is proposed. With positive biasing of a ring electrode relative to the end flanges, and with a longitudinal magnetic field, the confinement of fast electrons in the discharge is improved, allowing reliable triggering of the pulsed RF discharge at low gas density and enhancing the rate of ion generation. In the proposed discharge combination, the electron energy is increased by the RF field and the fast-electron confinement is improved by the enhanced positive plasma potential, which significantly improves the efficiency of plasma generation. This combination creates a synergetic effect, significantly improving plasma generation performance at low gas density. The discharge parameters can be optimized for enhanced plasma generation with acceptable electrode sputtering.
System and method for leveraging human physiological traits to control microprocessor frequency
Shye, Alex; Pan, Yan; Scholbrock, Benjamin; Miller, J. Scott; Memik, Gokhan; Dinda, Peter A; Dick, Robert P
2014-03-25
A system and method for leveraging physiological traits to control microprocessor frequency are disclosed. In some embodiments, the system and method may optimize, for example, a particular processor-based architecture based on, for example, end-user satisfaction. In some embodiments, the system and method may determine whether users are satisfied, in order to provide higher efficiency, improved reliability, reduced power consumption, increased security, and a better user experience. The system and method may use, for example, biometric input devices to provide information about a user's physiological traits to a computer system. Biometric input devices may include, for example, one or more of the following: an eye tracker, a galvanic skin response sensor, and/or a force sensor.
Reliability and paste process optimization of eutectic and lead-free for mixed packaging
NASA Technical Reports Server (NTRS)
Ramkumar, S. M.; Ganeshan, V.; Thenalur, K.; Ghaffarian, R.
2002-01-01
This paper reports the results of an experiment that utilized JPL's area array consortium test vehicle design, containing a myriad of mixed-technology components with an OSP finish. The details of the reliability study are presented in this paper.
Real-Time Load-Side Control of Electric Power Systems
NASA Astrophysics Data System (ADS)
Zhao, Changhong
Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load side resources. Specifically, it studies two problems. (1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control. (2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
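Schematically, the OLC problem described above takes the form (our compact restatement of the abstract's description):

$$\min_{\underline{p}_i \le p_i \le \overline{p}_i}\;\sum_i d_i(p_i) \quad\text{subject to}\quad \sum_i p_i = P_m,$$

where $p_i$ is the power deviation of load $i$ from its nominal usage, $d_i(\cdot)$ its disutility for participating, and $P_m$ the supply-demand imbalance to be absorbed; distributed algorithms that solve this problem then serve directly as the load-side frequency controllers whose closed-loop stability the thesis proves.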
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, L.; Rao, N.D.
1983-04-01
This paper presents a new method for optimal dispatch of real and reactive power generation, based on a Cartesian-coordinate formulation of the economic dispatch problem and a reclassification of the state and control variables associated with generator buses. The voltage and power at these buses are classified as parametric and functional inequality constraints, and are handled by a reduced gradient technique and a penalty factor approach, respectively. The advantage of this classification is the reduction in the size of the equality-constraint model, leading to lower storage requirements. The rectangular-coordinate formulation results in an exact equality-constraint model in which the coefficient matrix is real, sparse, diagonally dominant, smaller in size, and need be computed and factorized only once in each gradient step. In addition, Lagrangian multipliers are calculated using a new efficient procedure. A natural outcome of these features is a solution of the economic dispatch problem that is faster than other methods available to date in the literature. Rapid and reliable convergence is an additional desirable characteristic of the method. Digital simulation results are presented for several IEEE test systems to illustrate the range of application of the method vis-à-vis the popular Dommel-Tinney (DT) procedure. It is found that the proposed method is more reliable, 3-4 times faster, and requires 20-30 percent less storage than the DT algorithm, while being just as general. Thus, owing to its exactness, robust mathematical model, and low computational requirements, the method developed in the paper is shown to be a practically feasible algorithm for on-line optimal power dispatch.
A pragmatic decision model for inventory management with heterogeneous suppliers
NASA Astrophysics Data System (ADS)
Nakandala, Dilupa; Lau, Henry; Zhang, Jingjing; Gunasekaran, Angappa
2018-05-01
For enterprises, it is imperative that the trade-off between the cost of inventory and its risk implications is managed in the most efficient manner. To explore this, we use the common example of a wholesaler operating in an environment where suppliers demonstrate heterogeneous reliability. The wholesaler places partial orders with dual suppliers and uses lateral transshipments. While supplier reliability is a key concern in inventory management, reliable suppliers are more expensive, and investment in strategic approaches that improve supplier performance carries a high cost. Here we consider the operational strategy of dual sourcing with reliable and unreliable suppliers and model the total inventory cost in the likely scenario that the lead-time of the unreliable suppliers extends beyond the scheduling period. We then develop a Customized Integer Programming Optimization Model to determine the optimum size of partial orders with multiple suppliers. In addition to the objective of total cost optimization, this study takes into account the volatility of the cost associated with the uncertainty of the inventory system.
Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.
Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming
2016-08-01
In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, with hybridization of a known algorithm called NSGA-II and an adaptive population-based simulated annealing (APBSA) method is developed to solve the systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management.
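As a single-objective illustration of the redundancy-allocation setting behind this paper, the sketch below uses plain simulated annealing to choose how many parallel components each subsystem of a series system gets under a budget; the component data are assumptions, and the paper's bi-objective NSGA-II/APBSA hybrid is considerably more elaborate.

```python
import math, random

r = [0.80, 0.90, 0.85]   # component reliabilities per subsystem (assumed)
c = [3.0, 5.0, 4.0]      # component costs (assumed)
BUDGET = 40.0

def reliability(n):      # series system of parallel groups
    p = 1.0
    for ri, ni in zip(r, n):
        p *= 1.0 - (1.0 - ri) ** ni
    return p

def cost(n):
    return sum(ci * ni for ci, ni in zip(c, n))

random.seed(0)
n = [1, 1, 1]
best, best_rel = n[:], reliability(n)
T = 1.0
for _ in range(2000):
    cand = n[:]
    i = random.randrange(len(n))
    cand[i] = max(1, cand[i] + random.choice([-1, 1]))
    if cost(cand) <= BUDGET:
        d = reliability(cand) - reliability(n)
        # accept improvements; accept worsenings with Boltzmann probability
        if d > 0 or random.random() < math.exp(d / T):
            n = cand
            if reliability(n) > best_rel:
                best, best_rel = n[:], reliability(n)
    T *= 0.998               # geometric cooling schedule

print(best, round(best_rel, 4), cost(best))
```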
NASA Astrophysics Data System (ADS)
Strunz, Richard; Herrmann, Jeffrey W.
2011-12-01
The hot fire test strategy for liquid rocket engines has always been a concern of space industry and agency alike because no recognized standard exists. Previous hot fire test plans focused on the verification of performance requirements but did not explicitly include reliability as a dimensioning variable. The stakeholders are, however, concerned about a hot fire test strategy that balances reliability, schedule, and affordability. A multiple criteria test planning model is presented that provides a framework to optimize the hot fire test strategy with respect to stakeholder concerns. The Staged Combustion Rocket Engine Demonstrator, a program of the European Space Agency, is used as example to provide the quantitative answer to the claim that a reduced thrust scale demonstrator is cost beneficial for a subsequent flight engine development. Scalability aspects of major subsystems are considered in the prior information definition inside the Bayesian framework. The model is also applied to assess the impact of an increase of the demonstrated reliability level on schedule and affordability.
Performance-based maintenance of gas turbines for reliable control of degraded power systems
NASA Astrophysics Data System (ADS)
Mo, Huadong; Sansavini, Giovanni; Xie, Min
2018-03-01
Maintenance actions are necessary for ensuring the proper operation of control systems under component degradation. However, current condition-based maintenance (CBM) models based on component health indices are not suitable for degraded control systems. Indeed, failures of control systems are determined only by the controller outputs, and the feedback mechanism compensates for the control performance loss caused by component deterioration. Thus, control systems may still operate normally even if the component health indices exceed failure thresholds. This work investigates a CBM model of control systems that employs the reduced control performance as a direct degradation measure for deciding maintenance activities. The reduced control performance depends on the underlying component degradation, modelled as a Wiener process, and on the feedback mechanism. To this aim, the controller features are quantified by developing a dynamic and stochastic control-block-diagram-based simulation model, consisting of the degraded components and the control mechanism. At each inspection, the system receives a maintenance action if the control performance deterioration exceeds the preventive-maintenance or failure thresholds. Inspired by realistic cases, the component degradation model considers a random start time and unit-to-unit variability. The cost analysis of the maintenance model is conducted via Monte Carlo simulation. Optimal maintenance strategies are investigated to minimize the expected maintenance costs, which are a direct consequence of the control performance. The proposed framework is applied to design preventive maintenance actions for a gas power plant, ensuring the required load-frequency control performance against a sudden load increase. The optimization results identify the trade-off between system downtime and maintenance costs as a function of the preventive-maintenance thresholds and the inspection frequency. Finally, the control-performance-based maintenance model can reduce maintenance costs compared to CBM and pre-scheduled maintenance.
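A minimal Monte Carlo sketch of the degradation model described above: a Wiener process with drift, a random start time, and unit-to-unit drift variability. All parameter values and the failure level are illustrative assumptions, not the paper's plant data.

```python
import numpy as np

def simulate_degradation(n_paths=10000, horizon=100.0, dt=0.1,
                         mu=0.05, sigma_w=0.1, fail_level=4.0, seed=0):
    """Simulate X(t) = mu*(t - t0) + sigma*B(t - t0) for t >= t0, with
    random onset t0 and unit-to-unit drift variability; return the first
    time each path crosses the failure level (inf if it never does)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, horizon, dt)
    start = rng.uniform(0.0, 20.0, n_paths)    # random degradation onset
    drift = rng.normal(mu, 0.01, n_paths)      # unit-to-unit variability
    fail_time = np.full(n_paths, np.inf)
    x = np.zeros(n_paths)
    for k in range(1, len(t)):
        active = t[k] >= start
        dx = drift * dt + sigma_w * np.sqrt(dt) * rng.standard_normal(n_paths)
        x[active] += dx[active]
        newly = (x >= fail_level) & np.isinf(fail_time)
        fail_time[newly] = t[k]
    return fail_time

ft = simulate_degradation()
print("P(failure before t=80):", (ft <= 80.0).mean())
```

In the paper's setting the quantity that is thresholded is not this raw state but the resulting loss of control performance after the feedback loop has partially compensated for it, which is precisely why component-level thresholds alone are misleading.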
Lee, Hyung-Min; Ghovanloo, Maysam
2014-01-01
In this paper, we present a fully integrated active voltage doubler in CMOS technology using offset-controlled high-speed comparators for extending the range of inductive power transmission to implantable microelectronic devices (IMD) and radio-frequency identification (RFID) tags. This active voltage doubler provides considerably higher power conversion efficiency (PCE) and lower dropout voltage compared to its passive counterpart and requires lower input voltage than active rectifiers, leading to reliable and efficient operation with weakly coupled inductive links. The offset-controlled functions in the comparators compensate for turn-on and turn-off delays to not only maximize the forward charging current to the load but also minimize the back current, optimizing PCE in the high frequency (HF) band. We fabricated the active voltage doubler in a 0.5-μm 3M2P std. CMOS process, occupying 0.144 mm2 of chip area. With a 1.46 V peak AC input at 13.56 MHz, the active voltage doubler provides 2.4 V DC output across a 1 kΩ load, achieving the highest PCE = 79% ever reported at this frequency. In addition, the built-in start-up circuit ensures reliable operation at lower voltages. PMID:23853321
Turbine blade tip clearance measurements using skewed dual optical beams of tip timing
NASA Astrophysics Data System (ADS)
Ye, De-chao; Duan, Fa-jie; Guo, Hao-tian; Li, Yangzong; Wang, Kai
2011-12-01
Optimization and active control of the clearance between turbine blades and the engine case is identified, especially in the aerospace community, as a key technology to increase engine efficiency, reduce fuel consumption and emissions, and increase service life. However, the tip clearance varies across operating conditions, so a reliable non-contact, online detection system is essential and is ultimately used to close the tip clearance control loop. This paper describes a fiber-optic clearance measurement system applying skewed dual optical beams to detect the traversal time of passing blades. The two beams were designed with an outward angle of 18° and beam spot diameters of less than 100 μm within a 0-4 mm working range to achieve a high signal-to-noise ratio and high sensitivity. Theoretical analysis shows that the measurement accuracy is not compromised by degradation of signal intensity caused by environmental conditions such as light-source instability, contamination, and blade-tip imperfection. Experimental tests achieved a resolution of 10 μm over the rotational speed range of 2000-18000 RPM and a measurement accuracy of 15 μm, indicating that the system is capable of providing accurate and reliable data for active clearance control (ACC).
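The measurement principle reduces to a small geometric calculation: because the two beams diverge outward, their spot separation at the blade tip grows linearly with clearance, so the blade's traversal time between the two beams encodes the clearance. The sketch below treats 18° as each beam's outward tilt from the probe axis and assumes zero spot separation at zero clearance; both are illustrative assumptions about the exact probe geometry.

```python
import math

def tip_clearance(delta_t, rpm, radius, theta_deg=18.0, s0=0.0):
    """Clearance from the time a blade takes to traverse both beams.

    The beam spot separation at distance d from the probe is
    s = s0 + 2*d*tan(theta); multiplying the measured traversal time by
    the blade tip speed gives s, from which d is recovered.
    """
    v_tip = 2.0 * math.pi * radius * rpm / 60.0   # blade tip speed, m/s
    separation = v_tip * delta_t                  # beam spacing at the tip, m
    return (separation - s0) / (2.0 * math.tan(math.radians(theta_deg)))

# e.g. a 2.5 microsecond traversal at 12000 RPM on a 0.2 m radius rotor
print(tip_clearance(2.5e-6, 12000, 0.2))   # ~0.97 mm, inside the 0-4 mm range
```

Because only the time between the two beam crossings is used, a uniform drop in signal intensity shifts neither crossing instant, which is why the accuracy is insensitive to source drift and contamination.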
NASA Astrophysics Data System (ADS)
Bower, Ward
2011-09-01
An overview is provided of the activities and progress made during the US DOE Solar Energy Grid Integration Systems (SEGIS) solicitation toward grid integration that maintains reliability and economics. The SEGIS R&D opened pathways for interconnecting PV systems to intelligent utility grids and micro-grids of the future. The new hardware offers "value added" features in addition to new capabilities. The new hardware designs resulted in smaller, less material-intensive products that are being viewed by utilities as enabling dispatchable generation rather than just unpredictable negative loads. The technical solutions enable "advanced integrated system" concepts and "smart grid" processes to move forward in a faster and more focused manner. The advanced integrated inverters/controllers can now incorporate energy management functionality, intelligent electrical grid support features, and a multiplicity of communication technologies. Portals for energy flow and two-way communications have been implemented. SEGIS hardware was developed for the utility grid of today, which was designed for one-way power flow, for intermediate grid scenarios, AND for the grid of tomorrow, which will seamlessly accommodate managed two-way power flows as required by large-scale deployment of solar and other distributed generation. The SEGIS hardware and control developed for today meets existing standards and codes AND provides for future connection to a "smart grid" mode that enables utility control and optimized performance.
NASA Technical Reports Server (NTRS)
Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John
1994-01-01
This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. The NASA-LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rah, Jeong-Eun; Oh, Do Hoon; Shin, Dongho
Purpose: To evaluate and improve the reliability of proton quality assurance (QA) processes and to provide an optimal customized tolerance level using the statistical process control (SPC) methodology. Methods: The authors investigated the consistency check of dose per monitor unit (D/MU) and range in proton beams to see whether it was within the tolerance level of the daily QA process. This study analyzed the difference between the measured and calculated ranges along the central axis to improve the patient-specific QA process in proton beams by using process capability indices. Results: The authors established a customized tolerance level of ±2% for D/MU and ±0.5 mm for beam range in the daily proton QA process. In the authors’ analysis of the process capability indices, the patient-specific range measurements were capable of a specification limit of ±2% in clinical plans. Conclusions: SPC methodology is a useful tool for customizing optimal QA tolerance levels and improving the quality of proton machine maintenance, treatment delivery, and ultimately patient safety.
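Process capability indices of the kind used in the range analysis are straightforward to compute. A minimal sketch, assuming normally distributed QA deviations and using the ±2% tolerance as two-sided specification limits; the sample data are synthetic, for illustration only.

```python
import numpy as np

def capability_indices(samples, lsl, usl):
    """Cp and Cpk of a QA quantity against lower/upper spec limits."""
    mu = np.mean(samples)
    sigma = np.std(samples, ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)                   # potential capability
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)      # centred capability
    return cp, cpk

# Synthetic daily D/MU consistency-check deviations (%) against +/-2% limits.
rng = np.random.default_rng(1)
deviations = rng.normal(0.1, 0.4, size=120)
print(capability_indices(deviations, -2.0, 2.0))
```

A Cpk comfortably above 1 indicates the process stays within the customized tolerance, which is roughly the criterion behind the "capable of a specification limit of ±2%" conclusion.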
[Quality by design approaches for pharmaceutical development and manufacturing of Chinese medicine].
Xu, Bing; Shi, Xin-Yuan; Wu, Zhi-Sheng; Zhang, Yan-Ling; Wang, Yun; Qiao, Yan-Jiang
2017-03-01
The pharmaceutical quality is built by design, formed in the manufacturing process, and improved during the product's lifecycle. Based on a comprehensive literature review of pharmaceutical quality by design (QbD), the essential ideas and implementation strategies of pharmaceutical QbD are interpreted. Considering the complex nature of Chinese medicine, the "4H" model is proposed for implementing QbD in the pharmaceutical development and industrial manufacture of Chinese medicine products. "4H" is the acronym of holistic design, holistic information analysis, holistic quality control, and holistic process optimization, which is consistent with the holistic concept of Chinese medicine theory. The holistic design aims at constructing both the quality problem space from the patient requirement and the quality solution space from multidisciplinary knowledge. Holistic information analysis emphasizes understanding the quality pattern of Chinese medicine by integrating and mining multisource data and information at a relatively high level. Batch-to-batch quality consistency and manufacturing system reliability can be realized by comprehensive application of inspective, statistical, predictive, and intelligent quality control strategies. Holistic process optimization is to improve the product quality and process capability during product lifecycle management. The implementation of QbD is useful to eliminate the ecosystem contradictions lying in the pharmaceutical development and manufacturing process of Chinese medicine products, and helps guarantee cost-effectiveness. Copyright© by the Chinese Pharmaceutical Association.
Mission Data System Java Edition Version 7
NASA Technical Reports Server (NTRS)
Reinholtz, William K.; Wagner, David A.
2013-01-01
The Mission Data System framework defines closed-loop control system abstractions from State Analysis including interfaces for state variables, goals, estimators, and controllers that can be adapted to implement a goal-oriented control system. The framework further provides an execution environment that includes a goal scheduler, execution engine, and fault monitor that support the expression of goal network activity plans. Using these frameworks, adapters can build a goal-oriented control system where activity coordination is verified before execution begins (plan time), and continually during execution. Plan failures including violations of safety constraints expressed in the plan can be handled through automatic re-planning. This version optimizes a number of key interfaces and features to minimize dependencies, performance overhead, and improve reliability. Fault diagnosis and real-time projection capabilities are incorporated. This version enhances earlier versions primarily through optimizations and quality improvements that raise the technology readiness level. Goals explicitly constrain system states over explicit time intervals to eliminate ambiguity about intent, as compared to command-oriented control that only implies persistent intent until another command is sent. A goal network scheduling and verification process ensures that all goals in the plan are achievable before starting execution. Goal failures at runtime can be detected (including predicted failures) and handled by adapted response logic. Responses can include plan repairs (try an alternate tactic to achieve the same goal), goal shedding, ignoring the fault, cancelling the plan, or safing the system.
On the design of high-rise buildings with a specified level of reliability
NASA Astrophysics Data System (ADS)
Dolganov, Andrey; Kagan, Pavel
2018-03-01
High-rise buildings have a specificity that significantly distinguishes them from traditional multi-storey buildings. Steel structures are advisable for high-rise buildings in earthquake-prone regions, since steel, due to its plasticity, provides damping of the kinetic energy of seismic impacts. These aspects should be taken into account when choosing a structural scheme of a high-rise building and designing load-bearing structures. Currently, modern regulatory documents do not quantify the reliability of structures, although the problem of assigning an optimal level of reliability has existed for a long time. The article shows the possibility of designing metal structures of high-rise buildings with specified reliability. It is proposed to establish a reliability value of 0.99865 (3σ) for structures of buildings of a normal level of responsibility in calculations for the first group of limit states. For increased (high-rise construction) and reduced levels of responsibility, it is proposed to assign 0.99997 (4σ) and 0.97725 (2σ), respectively, for the provision of load-bearing capacity. Cross-section utilization coefficients of a metal beam for the different levels of reliability are given.
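The quoted reliability values are simply one-sided standard normal probabilities at the named sigma levels, which can be checked directly (a two-line verification, not part of the paper):

```python
from scipy.stats import norm

# One-sided standard normal probabilities at the paper's sigma levels.
for label, k in (("2-sigma", 2), ("3-sigma", 3), ("4-sigma", 4)):
    print(label, norm.cdf(k))   # 0.97725, 0.99865, 0.99997 (rounded)
```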
Optimal reinforcement of training datasets in semi-supervised landmark-based segmentation
NASA Astrophysics Data System (ADS)
Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž
2015-03-01
During the last couple of decades, the development of computerized image segmentation shifted from unsupervised to supervised methods, which made segmentation results more accurate and robust. However, the main disadvantage of supervised segmentation is the need for manual image annotation, which is time-consuming and subject to human error. To reduce the need for manual annotation, we propose a novel learning approach for training dataset reinforcement in the area of landmark-based segmentation, where newly detected landmarks are optimally combined with reference landmarks from the training dataset, thereby enriching the training process. The approach is formulated as a nonlinear optimization problem whose solution is a vector of weighting factors that measures how reliable the detected landmarks are. Detected landmarks found to be more reliable are included in the training procedure with higher weighting factors, whereas those found to be less reliable are included with lower weighting factors. The approach is integrated into the landmark-based game-theoretic segmentation framework and validated against the problem of lung field segmentation from chest radiographs.
Reliable actuators for twin rotor MIMO system
NASA Astrophysics Data System (ADS)
Rao, Vidya S.; V. I, George; Kamath, Surekha; Shreesha, C.
2017-11-01
The Twin Rotor MIMO System (TRMS) is a benchmark system for testing flight control algorithms. One of the perturbations on the TRMS that is likely to affect the control system is actuator failure. Therefore, there is a need for a reliable control system, which includes an H-infinity controller along with redundant actuators. Reliable control refers to the design of a control system that tolerates failures of a certain set of actuators or sensors while retaining desired control system properties. The output of the reliable controller has to be transferred to the redundant actuator effectively to keep the TRMS reliable even under actual actuator failure.
Topology and boundary shape optimization as an integrated design tool
NASA Technical Reports Server (NTRS)
Bendsoe, Martin Philip; Rodrigues, Helder Carrico
1990-01-01
The optimal topology of a two-dimensional linear elastic body can be computed by regarding the body as a domain of the plane with a high density of material. Such an optimal topology can then be used as the basis for a shape optimization method that computes the optimal form of the boundary curves of the body. This results in an efficient and reliable design tool, which can be implemented via a common FEM mesh generator and CAD-type input-output facilities.
Fuzzy probabilistic design of water distribution networks
NASA Astrophysics Data System (ADS)
Fu, Guangtao; Kapelan, Zoran
2011-05-01
The primary aim of this paper is to present a fuzzy probabilistic approach for optimal design and rehabilitation of water distribution systems, combining aleatoric and epistemic uncertainties in a unified framework. The randomness and imprecision in future water consumption are characterized using fuzzy random variables whose realizations are not real but fuzzy numbers, and the nodal head requirements are represented by fuzzy sets, reflecting the imprecision in customers' requirements. The optimal design problem is formulated as a two-objective optimization problem, with minimization of total design cost and maximization of system performance as objectives. The system performance is measured by the fuzzy random reliability, defined as the probability that the fuzzy head requirements are satisfied across all network nodes. The satisfactory degree is represented by necessity measure or belief measure in the sense of the Dempster-Shafer theory of evidence. An efficient algorithm is proposed, within a Monte Carlo procedure, to calculate the fuzzy random system reliability and is effectively combined with the nondominated sorting genetic algorithm II (NSGAII) to derive the Pareto optimal design solutions. The newly proposed methodology is demonstrated with two case studies: the New York tunnels network and Hanoi network. The results from both cases indicate that the new methodology can effectively accommodate and handle various aleatoric and epistemic uncertainty sources arising from the design process and can provide optimal design solutions that are not only cost-effective but also have higher reliability to cope with severe future uncertainties.
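A much-simplified sketch of the reliability evaluation inside the Monte Carlo loop: here a shoulder-type membership function stands in for the fuzzy head requirement, and reliability is estimated as the probability that every node's satisfaction degree clears a threshold. This is a crude surrogate for the paper's necessity/belief measures and hydraulic model; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def satisfaction(head, h_min=28.0, h_des=30.0):
    """Fuzzy satisfaction of a nodal head requirement: 0 below h_min,
    1 above h_des, linear in between (a shoulder-type membership)."""
    return np.clip((head - h_min) / (h_des - h_min), 0.0, 1.0)

def fuzzy_random_reliability(mean_heads, sigma=1.5, alpha=0.8, n=100_000):
    """P(every node's satisfaction degree >= alpha) under random,
    demand-driven head fluctuations."""
    heads = rng.normal(mean_heads, sigma, size=(n, len(mean_heads)))
    ok = (satisfaction(heads) >= alpha).all(axis=1)
    return ok.mean()

print(fuzzy_random_reliability(np.array([31.0, 30.5, 32.0])))
```

In the full methodology, a reliability estimate of this kind is one of the two objectives handed to NSGAII, with total design cost as the other.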
Numerical Optimization Algorithms and Software for Systems Biology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
Jambor, Ivan; Merisaari, Harri; Aronen, Hannu J; Järvinen, Jukka; Saunavaara, Jani; Kauko, Tommi; Borra, Ronald; Pesola, Marko
2014-05-01
To determine the optimal b-value distribution for biexponential diffusion-weighted imaging (DWI) of the normal prostate using both a computer modeling approach and in vivo measurements. Optimal b-value distributions for the fit of three parameters (fast diffusion Df, slow diffusion Ds, and fraction of fast diffusion f) were determined using Monte Carlo simulations. The optimal b-value distribution was calculated using four individual optimization methods. Eight healthy volunteers underwent four repeated 3 Tesla prostate DWI scans using both 16 equally distributed b-values and an optimized b-value distribution obtained from the simulations. The b-value distributions were compared in terms of measurement reliability and repeatability using Shrout-Fleiss analysis. Using low noise levels, the optimal b-value distribution formed three separate clusters at low (0-400 s/mm2), mid-range (650-1200 s/mm2), and high b-values (1700-2000 s/mm2). Higher noise levels resulted in less pronounced clustering of b-values. The clustered optimized b-value distribution demonstrated better measurement reliability and repeatability in Shrout-Fleiss analysis compared with 16 equally distributed b-values. The optimal b-value distribution was found to be a clustered distribution with b-values concentrated in the low, mid, and high ranges and was shown to improve the estimation quality of biexponential DWI parameters in vivo. Copyright © 2013 Wiley Periodicals, Inc.
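The model being fitted is the biexponential signal decay S(b) = S0·(f·e^(−b·Df) + (1 − f)·e^(−b·Ds)). The sketch below fits it to synthetic data acquired at a clustered b-value distribution of the low/mid/high form described above; the parameter values are illustrative, not the study's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(b, s0, f, d_fast, d_slow):
    """Biexponential DWI signal: S0 * (f*exp(-b*Df) + (1-f)*exp(-b*Ds))."""
    return s0 * (f * np.exp(-b * d_fast) + (1 - f) * np.exp(-b * d_slow))

rng = np.random.default_rng(3)
# Clustered b-values (s/mm2) in the low, mid, and high ranges.
b = np.array([0, 100, 200, 400, 650, 900, 1200, 1700, 1850, 2000], float)
truth = (1.0, 0.7, 2.2e-3, 0.5e-3)           # S0, f, Df, Ds (illustrative)
signal = biexp(b, *truth) + rng.normal(0.0, 0.01, b.size)

popt, _ = curve_fit(biexp, b, signal, p0=(1.0, 0.5, 2e-3, 3e-4),
                    bounds=([0, 0, 1e-4, 1e-5], [2, 1, 1e-2, 2e-3]))
print(popt)   # recovered (S0, f, Df, Ds)
```

Repeating such fits over noise realizations for candidate b-value sets is the essence of the Monte Carlo optimization: distributions that yield tighter parameter estimates win.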
System Risk Assessment and Allocation in Conceptual Design
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Smith, Natasha L.; Zang, Thomas A. (Technical Monitor)
2003-01-01
As aerospace systems continue to evolve in addressing newer challenges in air and space transportation, there exists a heightened priority for significant improvement in system performance, cost effectiveness, reliability, and safety. Tools that synthesize multidisciplinary integration, probabilistic analysis, and optimization are needed to facilitate design decisions that allow trade-offs between cost and reliability. This study investigates tools for probabilistic analysis and probabilistic optimization in the multidisciplinary design of aerospace systems. A probabilistic optimization methodology is demonstrated for the low-fidelity design of a reusable launch vehicle at two levels, a global geometry design and a local tank design. Probabilistic analysis is performed on a high-fidelity analysis of a Navy missile system. Furthermore, decoupling strategies are introduced to reduce the computational effort required for multidisciplinary systems with feedback coupling.
DOT National Transportation Integrated Search
2010-12-01
The purpose of this qualitative case study was to identify the types of obstacles and patterns experienced by a single heavy rail transit agency located in North America that embedded a Reliability Centered Maintenance (RCM) Process. The outcome of t...
Chemical vapor deposition modeling: An assessment of current status
NASA Technical Reports Server (NTRS)
Gokoglu, Suleyman A.
1991-01-01
The shortcomings of earlier approaches that assumed thermochemical equilibrium and used chemical vapor deposition (CVD) phase diagrams are pointed out. Significant advancements in predictive capabilities due to recent computational developments, especially those for deposition rates controlled by gas phase mass transport, are demonstrated. The importance of using the proper boundary conditions is stressed, and the availability and reliability of gas phase and surface chemical kinetic information are emphasized as the most limiting factors. Future directions for CVD are proposed on the basis of current needs for efficient and effective progress in CVD process design and optimization.
Advances and trends in computational structural mechanics
NASA Technical Reports Server (NTRS)
Noor, A. K.
1986-01-01
Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.
Instrument for Measuring Thermal Conductivity of Materials at Low Temperatures
NASA Technical Reports Server (NTRS)
Fesmire, James; Sass, Jared; Johnson, Wesley
2010-01-01
With the advance of polymer and other non-metallic material sciences, whole new series of polymeric materials and composites are being created. These materials are being optimized for many different applications, including cryogenic and low-temperature industrial processes. Engineers need these data to perform detailed system designs and enable new design possibilities for improved control, reliability, and efficiency in specific applications. One main area of interest is cryogenic structural elements, fluid-handling components, and other parts, films, and coatings for low-temperature applications. An important thermal property of these new materials is the apparent thermal conductivity (k-value).
Note: Cryogenic heat switch with stepper motor actuator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melcher, B. S., E-mail: bsmelche@syr.edu; Timbie, P. T., E-mail: pttimbie@wisc.edu
2015-12-15
A mechanical cryogenic heat switch has been developed using a commercially available stepper motor and control electronics. The motor requires 4 leads, each carrying a maximum, pulsed current of 0.5 A. With slight modifications of the stepper motor, the switch functions reliably in vacuum at temperatures between 300 K and 4 K. The switch generates a clamping force of 262 N at room temperature. At 4 K it achieves an “on state” thermal conductance of 5.04 mW/K and no conductance in the “off state.” The switch is optimized for cycling an adiabatic demagnetization refrigerator.
Integrated optimization of nonlinear R/C frames with reliability constraints
NASA Technical Reports Server (NTRS)
Soeiro, Alfredo; Hoit, Marc
1989-01-01
A structural optimization algorithm was developed that includes global displacements as decision variables. The algorithm was applied to planar reinforced concrete frames with nonlinear material behavior subjected to static loading. The flexural performance of the elements was evaluated as a function of the actual stress-strain diagrams of the materials. Formation of rotational hinges with strain hardening was allowed, and the equilibrium constraints were updated accordingly. The adequacy of the frames was guaranteed by imposing as constraints required reliability indices for the members, maximum global displacements for the structure, and a maximum system probability of failure.
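Reliability-index constraints of this kind are often checked, to first order, with β = (μ_R − μ_S) / sqrt(σ_R² + σ_S²) for a linear limit state g = R − S with independent normal resistance and load effect. A minimal sketch under that textbook assumption (the paper's members use their own limit states; the numbers here are illustrative):

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability index for g = R - S with independent
    normal resistance R and load effect S."""
    return (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)

# A member satisfies its constraint when beta meets the required target.
beta = reliability_index(mu_r=420.0, sigma_r=35.0, mu_s=300.0, sigma_s=30.0)
print(round(beta, 2), beta >= 3.0)   # 2.6 False: this member fails the check
```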
Reliability Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2012-01-01
Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.
NASA Astrophysics Data System (ADS)
Dharmalingam, Gnanaprakash
The monitoring of polluting gases such as CO and NOx emitted from gas turbines in power plants and aircraft is important both to reduce the effects of such gases on the environment and to optimize the performance of the respective power system. The need for emissions monitoring systems is further underscored by increased regulatory requirements instituted in response to the environmental impact of increased air travel. Specifically, it is estimated that the contribution of aircraft emissions to total NOx emissions will increase from 4% to 17% between 2008 and 2020. Extensive fuel cost savings as well as a reduced environmental impact would therefore be realized if this increased air traffic utilized next-generation jet turbines with an emission/performance control sensing system. These future emissions monitoring systems must be sensitive and selective to the emission gases, and reliable and stable under harsh environmental conditions where operating temperatures are in excess of 500 °C within a highly reactive environment. Plasmonics-based chemical sensors using nanocomposites of gold nanoparticles and yttria-stabilized zirconia (YSZ) have enabled sensitive (ppm-level) and stable (hundreds of hours) detection of H2, NO2, and CO at temperatures of 500 °C. The detection method involves measuring the change in the localized surface plasmon resonance (LSPR) characteristics of the Au-YSZ nanocomposite, in particular the plasmon peak position. Selectivity remains a challenging parameter to optimize, and a layer-by-layer sputter deposition approach has recently been demonstrated to modify the resulting sensing properties through a change in the morphology of the deposited films. The material properties of the films have produced a unique sensing behavior in terms of a preferential response to H2 compared to CO. Although this is beneficial, further enhancements are expected through control of the shape and geometry of the catalytically active Au nanoparticles. While this is not possible through the layer-by-layer sputter deposition approach, this level of control has been realized through the use of electron beam lithography to fabricate nanocomposite arrays. Sensing results for the detection of H2 will be highlighted, with specific concerns related to the optimization of these nanorod arrays detailed. The proposed work will discuss the various parameters for optimization of these arrays, which would enable them to be used as reliable, sensitive, and selective harsh-environment sensors.
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety at about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.
Automated Testability Decision Tool
1991-09-01
Vol. 16, 1968, pp. 538-558. Bertsekas, D. P., Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York. McLeavey, D. W. and McLeavey, J. A., "Parallel Optimization Methods in Standby Reliability," University of Connecticut, School of Business Administration, Bureau of Business
Subsystem Analysis/Optimization for the X-34 Main Propulsion System
NASA Technical Reports Server (NTRS)
McDonald, J. P.; Hedayat, A.; Brown, T. M.; Knight, K. C.; Champion, R. H., Jr.
1998-01-01
The Orbital Sciences Corporation X-34 vehicle demonstrates technologies and operations key to future reusable launch vehicles. The general flight performance goal of this unmanned rocket plane is Mach 8 flight at an altitude of 250,000 feet. The Main Propulsion System (MPS) supplies liquid propellants to the main engine, which provides the primary thrust for attaining mission goals. Major MPS design and operational goals are aircraft-like ground operations, quick turnaround between missions, and low initial/operational costs. Analyses related to optimal MPS subsystem design are reviewed in this paper. A pressurization system trade weighs maintenance/reliability concerns against those for safety in a comparison of designs using pressure regulators versus orifices to control pressurant flow. A propellant dump/feed system analysis weighs the issues of maximum allowable vehicle landing weight, trajectory, and MPS complexity to arrive at a final configuration for propellant dump/feed systems.
Challenges in the management of breast cancer in low- and middle-income countries.
Yip, Cheng-Har; Taib, Nur Aishah
2012-12-01
The incidence of breast cancer is rising in low- and middle-income countries (LMICs) due to 'westernization' of risk factors for developing breast cancer. However, survival remains low because of barriers to early detection and to optimal access to treatment, which are the two main determinants of breast cancer outcome. A multidisciplinary approach to treatment gives the best results. An accurate diagnosis depends on a reliable pathology service, which will provide an adequate pathology report with prognostic and predictive information to allow optimal oncological treatment. Stratification of clinical practice guidelines based on resource level will ensure that women have access to treatment even in a low-resource setting. Advocacy and civil society play a role in galvanizing the political will required to meet the challenge of providing opportunities for breast cancer control in LMICs. Collaboration between high-income countries and LMICs could be a strategy for facing these challenges.
Introduction of the ASGARD code (Automated Selection and Grouping of events in AIA Regional Data)
NASA Astrophysics Data System (ADS)
Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv K.; Fayock, Brian
2017-08-01
We have developed the ASGARD code to automatically detect and group brightenings ("events") in AIA data. The event selection and grouping can be optimized to the respective dataset with a multitude of control parameters. The code was initially written for IRIS data but has since been optimized for AIA; however, the underlying algorithm is not limited to either and could be used for other data as well. Results from datasets in various AIA channels show that brightenings are reliably detected and that coherent coronal structures can be isolated by using the obtained information about the start, peak, and end times of events. We are presently working on a follow-up algorithm to automatically determine the heating and cooling timescales of coronal structures. This will be done by correlating the information from different AIA channels with different temperature responses. We will present the code and preliminary results.
Improving vaccination cold chain in the general practice setting.
Page, Sue L; Earnest, Arul; Birden, Hudson; Deaker, Rachelle; Clark, Chris
2008-10-01
This study compared temperature control in different types of vaccine-storing refrigerators in general practice and tested the knowledge of general practice staff in vaccine storage requirements. Temperature data loggers were set to serially record the temperature within vaccine refrigerators in 28 general practices, recording at 12 minute intervals over a period of 10 days on each occasion. A survey of vaccine storage knowledge and records of divisions of general practice immunisation contacts were also obtained. There was a significant relationship between type of refrigerator and optimal temperature, with the odds ratio for the bar-style refrigerator being 0.005 (95% CI: 0.001-0.044) compared to the purpose-built vaccine refrigerators. Score on a survey of vaccine storage was also positively associated with optimal storage temperature. General practices that invest in purpose-built vaccine refrigerators will achieve standards of vaccine cold chain maintenance significantly more reliably than can be achieved through regular cold chain monitoring and practice supports.
Requirements analysis and preliminary design of a robotic assistant for reconstructive microsurgery.
Vanthournhout, L; Herman, B; Duisit, J; Château, F; Szewczyk, J; Lengelé, B; Raucent, B
2015-08-01
Microanastomosis is a microsurgical gesture that involves suturing two very small blood vessels together. This gesture is used in many operations such as avulsed member auto-grafting, pediatric surgery, and reconstructive surgery, including breast reconstruction by free flap. When vessels have diameters smaller than one millimeter, hand tremors make movements difficult to control. This paper introduces our preliminary steps towards robotic assistance for helping surgeons to perform microanastomosis in optimal conditions, in order to increase gesture quality and reliability even on smaller diameters. A general needs assessment and an experimental motion analysis were performed to define the requirements of the robot. Geometric parameters of the kinematic structure were then optimized to fulfill specific objectives. A prototype of the robot is currently being designed and built in order to provide a sufficient increase in accuracy without prolonging the duration of the procedure.
Han, Jian; Liu, Juan; Yao, Xincheng; Wang, Yongtian
2015-02-09
A compact waveguide display system integrating freeform elements and volume holograms is presented here for the first time. The use of freeform elements can broaden the field of view, which otherwise limits the applications of a holographic waveguide. An optimized system can achieve a diagonal field of view of 45° when the thickness of the planar waveguide is 3 mm. The freeform-element in-coupler and the volume-hologram out-coupler were designed in detail in our study, and the influence of grating configurations on diffraction efficiency was analyzed thoroughly. The off-axis aberrations were well compensated by the in-coupler, and the diffraction efficiency of the optimized waveguide display system could reach 87.57%. With the integrated design, the stability and reliability of this monochromatic display system were achieved, and the alignment of the system was easily controlled by the recording of the volume holograms, which makes mass production possible. PMID:25836207
Behaviour State Analysis in Rett Syndrome: Continuous Data Reliability Measurement
ERIC Educational Resources Information Center
Woodyatt, Gail; Marinac, Julie; Darnell, Ross; Sigafoos, Jeff; Halle, James
2004-01-01
Awareness of optimal behaviour states of children with profound intellectual disability has been reported in the literature as a potentially useful tool for planning intervention within this population. Some arguments have been raised, however, which question the reliability and validity of previously published work on behaviour state analysis.…
NASA Astrophysics Data System (ADS)
Wang, Ping; Wu, Guangqiang
2013-03-01
Multidisciplinary design optimization (MDO) has gradually been adopted to balance lightweight, noise, vibration and harshness (NVH), and safety performance of the instrument panel (IP) structure in automotive development. Nevertheless, the plastic constitutive relation of polypropylene (PP) under different strain rates has not been taken into consideration in current reliability-based and collaborative IP MDO design. In this paper, based on tensile tests under different strain rates, the constitutive relation of the polypropylene material is studied. Impact simulation tests for the head and knee bolster are carried out to meet the regulations of FMVSS 201 and FMVSS 208, respectively. NVH analysis is performed to obtain mainly the natural frequencies and corresponding mode shapes, while crashworthiness analysis is employed to examine the crash behavior of the IP structure. With consideration of lightweight, NVH, and head and knee bolster impact performance, design of experiments (DOE), response surface models (RSM), and collaborative optimization (CO) are applied to realize the deterministic and reliability-based optimizations, respectively. Furthermore, based on a multi-objective genetic algorithm (MOGA), the optimal Pareto sets are obtained to solve the multi-objective optimization (MOO) problem. The proposed research ensures the smoothness of the Pareto set, enhances the ability of engineers to make a comprehensive decision about multiple objectives and choose the optimal design, and improves the quality and efficiency of MDO.
Lociciro, S; Esseiva, P; Hayoz, P; Dujourdy, L; Besacier, F; Margot, P
2008-05-20
Harmonisation and optimization of analytical and statistical methodologies were carried out between two forensic laboratories (Lausanne, Switzerland and Lyon, France) in order to provide drug intelligence for cross-border cocaine seizures. Part I dealt with the optimization of the analytical method and its robustness. This second part investigates statistical methodologies that provide reliable comparison of cocaine seizures analysed on two different gas chromatographs interfaced with flame ionisation detectors (GC-FIDs) in two distinct laboratories. Sixty-six statistical combinations (ten data pre-treatments followed by six different distance measurements and correlation coefficients) were applied. One pre-treatment (N+S: the area of each peak is divided by its standard deviation calculated from the whole data set) followed by the Cosine or Pearson correlation coefficients was found to be the best statistical compromise for optimal discrimination of linked and non-linked samples. The centralisation of the analyses in one single laboratory is no longer a required condition for comparing samples seized in different countries. This allows collaboration, but also jurisdictional control over data.
More efficient optimization of long-term water supply portfolios
NASA Astrophysics Data System (ADS)
Kirsch, Brian R.; Characklis, Gregory W.; Dillard, Karen E. M.; Kelley, C. T.
2009-03-01
The use of temporary transfers, such as options and leases, has grown as utilities attempt to meet increases in demand while reducing dependence on the expansion of costly infrastructure capacity (e.g., reservoirs). Earlier work has been done to construct optimal portfolios comprising firm capacity and transfers, using decision rules that determine the timing and volume of transfers. However, such work has only focused on the short-term (e.g., 1-year scenarios), which limits the utility of these planning efforts. Developing multiyear portfolios can lead to the exploration of a wider range of alternatives but also increases the computational burden. This work utilizes a coupled hydrologic-economic model to simulate the long-term performance of a city's water supply portfolio. This stochastic model is linked with an optimization search algorithm that is designed to handle the high-frequency, low-amplitude noise inherent in many simulations, particularly those involving expected values. This noise is detrimental to the accuracy and precision of the optimized solution and has traditionally been controlled by investing greater computational effort in the simulation. However, the increased computational effort can be substantial. This work describes the integration of a variance reduction technique (control variate method) within the simulation/optimization as a means of more efficiently identifying minimum cost portfolios. Random variation in model output (i.e., noise) is moderated using knowledge of random variations in stochastic input variables (e.g., reservoir inflows, demand), thereby reducing the computing time by 50% or more. Using these efficiency gains, water supply portfolios are evaluated over a 10-year period in order to assess their ability to reduce costs and adapt to demand growth, while still meeting reliability goals. As a part of the evaluation, several multiyear option contract structures are explored and compared.
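The control variate method itself fits in a few lines: subtract from the noisy output a β-scaled deviation of a correlated input whose mean is known exactly, which leaves the estimator unbiased while shrinking its variance. The toy cost function below is a stand-in for the coupled hydrologic-economic simulation, with reservoir inflow as the control variate; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def portfolio_cost(inflows, base_cost=100.0, lease_price=2.0, demand=50.0):
    """Toy annual portfolio cost: leases are bought to cover any shortfall."""
    shortfall = np.maximum(demand - inflows, 0.0)
    return base_cost + lease_price * shortfall

inflows = rng.gamma(shape=9.0, scale=6.0, size=20_000)  # stochastic input
y = portfolio_cost(inflows)

# Control variate: the inflow itself, whose mean (shape*scale) is known.
x, x_mean = inflows, 9.0 * 6.0
beta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
y_cv = y - beta * (x - x_mean)

print("plain estimate :", y.mean(), "+/-", y.std(ddof=1) / np.sqrt(y.size))
print("control variate:", y_cv.mean(), "+/-", y_cv.std(ddof=1) / np.sqrt(y.size))
```

The variance reduction depends on the squared correlation between output and control; the stronger the inflow-cost relationship, the closer one gets to the reported 50% savings in computing time.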
Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.
Herzallah, Randa
2015-03-01
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper, a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf is presented, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm, and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.
Carius, Lisa; Rumschinski, Philipp; Faulwasser, Timm; Flockerzi, Dietrich; Grammel, Hartmut; Findeisen, Rolf
2014-04-01
Microaerobic (oxygen-limited) conditions are critical for inducing many important microbial processes in industrial or environmental applications. At very low oxygen concentrations, however, process performance often suffers from technical limitations. Available dissolved-oxygen measurement techniques are not sensitive enough, and thus control techniques that can reliably handle these conditions are lacking. Recently, we proposed a microaerobic process control strategy that overcomes these restrictions and allows different degrees of oxygen limitation to be assessed in bioreactor batch cultivations. Here, we focus on the design of a control strategy for the automation of oxygen-limited continuous cultures, using the microaerobic formation of photosynthetic membranes (PM) in Rhodospirillum rubrum as a model phenomenon. We draw upon R. rubrum since the considered phenomenon depends on the optimal availability of mixed carbon sources, hence on boundary conditions that make the process performance challenging. Empirically assessing these specific microaerobic conditions is scarcely practicable, as such a process reacts highly sensitively to changes in the substrate composition and the oxygen availability in the culture broth. Therefore, we propose a model-based process control strategy that stabilizes steady states of cultures grown under these conditions. As designing the appropriate strategy requires detailed knowledge of the system behavior, we begin by deriving and validating an unstructured process model. This model is used to optimize the experimental conditions and identify properties of the system that are critical for process performance. The derived model facilitates good process performance via the proposed optimal control strategy. In summary, the presented model-based control strategy makes it possible to access and maintain microaerobic steady states of interest and to precisely and efficiently transfer the culture from one stable microaerobic steady state into another. The presented approach is therefore a valuable tool for studying regulatory mechanisms of microaerobic phenomena in response to oxygen limitation alone. Biotechnol. Bioeng. 2014;111: 734-747. © 2013 Wiley Periodicals, Inc.
Reliability of Fault Tolerant Control Systems. Part 1
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports Part I of a two-part effort intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized; in particular, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single-point failures. The utility of some existing software tools for assessing the reliability of fault-tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
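The affine dependence on coverage appears already in the classic duplex Markov model, where an uncovered first failure takes down the whole system while a covered one degrades it to simplex operation. A minimal sketch of that standard textbook model (not the paper's specific Markov chains):

```python
import numpy as np

def duplex_reliability(t, lam, c):
    """Duplex system with per-unit failure rate lam and coverage c:

        R(t) = exp(-2*lam*t) + 2*c*(exp(-lam*t) - exp(-2*lam*t))

    The first term is both units alive; the second is one covered failure
    followed by survival of the remaining unit."""
    return np.exp(-2 * lam * t) + 2 * c * (np.exp(-lam * t) - np.exp(-2 * lam * t))

# Early in life, R(t) ~ 1 - 2*(1 - c)*lam*t: affine in coverage, with
# imperfect coverage acting like a single-point failure of rate 2*(1-c)*lam.
lam, t = 1e-4, 10.0
for c in (0.9, 0.99, 0.999):
    print(c, duplex_reliability(t, lam, c))
```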
High-power VCSEL systems and applications
NASA Astrophysics Data System (ADS)
Moench, Holger; Conrads, Ralf; Deppe, Carsten; Derra, Guenther; Gronenborn, Stephan; Gu, Xi; Heusler, Gero; Kolb, Johanna; Miller, Michael; Pekarski, Pavel; Pollmann-Retsch, Jens; Pruijmboom, Armand; Weichmann, Ulrich
2015-03-01
Easy system design, compactness, and a uniform power distribution define the basic advantages of high-power VCSEL systems. Full addressability in space and time adds new dimensions for optimization and enables "digital photonic production". Many thermal processes benefit from the improved control, i.e., heat is applied exactly where and when it is needed. The compact VCSEL systems can be integrated into most manufacturing equipment, replacing batch processes that use large furnaces and reducing energy consumption. This paper will present how recent technological development of high-power VCSEL systems will extend the efficiency and flexibility of thermal processes and replace not only laser systems, lamps, and furnaces but enable new ways of production. High-power VCSEL systems are made from many VCSEL chips, each comprising thousands of low-power VCSELs. Systems scalable in power from watts to multiple tens of kilowatts and with various form factors utilize a common modular building-block concept. Designs for reliable high-power VCSEL arrays and systems can be developed and tested at each building-block level and benefit from the low power density and excellent reliability of the VCSELs. Furthermore, advanced assembly concepts aim to reduce the number of individual processes and components and make the whole system even simpler and more reliable.
How Reliable is the Acetabular Cup Position Assessment from Routine Radiographs?
Carvajal Alba, Jaime A.; Vincent, Heather K.; Sodhi, Jagdeep S.; Latta, Loren L.; Parvataneni, Hari K.
2017-01-01
Abstract Background: Cup position is crucial for optimal outcomes in total hip arthroplasty. Radiographic assessment of component position is routinely performed in the early postoperative period. Aims: The aims of this study were to determine in a controlled environment whether routine radiographic methods accurately and reliably assess the acetabular cup position and to assess whether there is a statistical difference related to the rater’s level of training. Methods: A pelvic model was mounted in a spatial frame. An acetabular cup was fixed in different degrees of version and inclination. Standardized radiographs were obtained. Ten observers, including five fellowship-trained orthopaedic surgeons and five orthopaedic residents, performed a blind assessment of cup position. Inclination was assessed from anteroposterior radiographs of the pelvis and version from cross-table lateral radiographs of the hip. Results: The radiographic methods used proved to be imprecise, especially when the cup was positioned at the extremes of version and inclination. Excellent inter-observer reliability (intra-class coefficient > 0.9) was evidenced. There were no differences related to the level of training of the raters. Conclusions: These widely used radiographic methods should be interpreted cautiously, and computed tomography should be utilized in cases where further intervention is contemplated. PMID:28852355
Development and testing of the cancer multidisciplinary team meeting observational tool (MDT-MOT)
Harris, Jenny; Taylor, Cath; Sevdalis, Nick; Jalil, Rozh; Green, James S.A.
2016-01-01
Abstract Objective To develop a tool for independent observational assessment of cancer multidisciplinary team meetings (MDMs), and test criterion validity, inter-rater reliability/agreement and describe performance. Design Clinicians and experts in teamwork used a mixed-methods approach to develop and refine the tool. Study 1 observers rated pre-determined optimal/sub-optimal MDM film excerpts and Study 2 observers independently rated video-recordings of 10 MDMs. Setting Study 2 included 10 cancer MDMs in England. Participants Testing was undertaken by 13 health service staff and a clinical and non-clinical observer. Intervention None. Main Outcome Measures Tool development, validity, reliability/agreement and variability in MDT performance. Results Study 1: Observers were able to discriminate between optimal and sub-optimal MDM performance (P ≤ 0.05). Study 2: Inter-rater reliability was good for 3/10 domains. Percentage of absolute agreement was high (≥80%) for 4/10 domains and percentage agreement within 1 point was high for 9/10 domains. Four MDTs performed well (scored 3+ in at least 8/10 domains), 5 MDTs performed well in 6–7 domains and 1 MDT performed well in only 4 domains. Leadership and chairing of the meeting, the organization and administration of the meeting, and clinical decision-making processes all varied significantly between MDMs (P ≤ 0.01). Conclusions MDT-MOT demonstrated good criterion validity. Agreement between clinical and non-clinical observers (within one point on the scale) was high but this was inconsistent with reliability coefficients and warrants further investigation. If further validated MDT-MOT might provide a useful mechanism for the routine assessment of MDMs by the local workforce to drive improvements in MDT performance. PMID:27084499
Development and testing of the cancer multidisciplinary team meeting observational tool (MDT-MOT).
Harris, Jenny; Taylor, Cath; Sevdalis, Nick; Jalil, Rozh; Green, James S A
2016-06-01
To develop a tool for independent observational assessment of cancer multidisciplinary team meetings (MDMs), and test criterion validity, inter-rater reliability/agreement and describe performance. Clinicians and experts in teamwork used a mixed-methods approach to develop and refine the tool. Study 1 observers rated pre-determined optimal/sub-optimal MDM film excerpts and Study 2 observers independently rated video-recordings of 10 MDMs. Study 2 included 10 cancer MDMs in England. Testing was undertaken by 13 health service staff and a clinical and non-clinical observer. None. Tool development, validity, reliability/agreement and variability in MDT performance. Study 1: Observers were able to discriminate between optimal and sub-optimal MDM performance (P ≤ 0.05). Study 2: Inter-rater reliability was good for 3/10 domains. Percentage of absolute agreement was high (≥80%) for 4/10 domains and percentage agreement within 1 point was high for 9/10 domains. Four MDTs performed well (scored 3+ in at least 8/10 domains), 5 MDTs performed well in 6-7 domains and 1 MDT performed well in only 4 domains. Leadership and chairing of the meeting, the organization and administration of the meeting, and clinical decision-making processes all varied significantly between MDMs (P ≤ 0.01). MDT-MOT demonstrated good criterion validity. Agreement between clinical and non-clinical observers (within one point on the scale) was high but this was inconsistent with reliability coefficients and warrants further investigation. If further validated MDT-MOT might provide a useful mechanism for the routine assessment of MDMs by the local workforce to drive improvements in MDT performance. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
NASA Astrophysics Data System (ADS)
Fiorini, Rodolfo A.; Dacquino, Gianfranco
2005-03-01
GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-dimensional shape/texture optimal synthetic representation, description, and learning, was presented at previous conferences recently. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space and a demo application are presented. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. An ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. In fact, any highly sophisticated application needing effective, robust object geometric/colour invariant attribute capture and parameterization features, for reliable automated object learning and discrimination, can deeply benefit from the GEOGINE progressive automated model generation computational kernel. The main operational advantages over previous, similar approaches are: 1) progressive automated invariant model generation; 2) an invariant minimal complete description set for computational efficiency; 3) arbitrary model precision for robust object description and identification.
Acquiring the optimal time for hyperbaric therapy in the rat model of CFA induced arthritis.
Koo, Sung Tae; Lee, Chang-Hyung; Shin, Yong Il; Ko, Hyun Yoon; Lee, Da Gyo; Jeong, Han-Sol
2014-01-01
We previously published an article about the pressure effect using a rheumatoid animal model. Hyperbaric therapy appears to be beneficial in treating rheumatoid arthritis (RA) by reducing the inflammatory process in an animal model. In this sense, acquiring the optimal pressure-treatment time parameter for RA is important, and no optimal hyperbaric therapy time has been suggested up to now. The purpose of our study was to acquire the optimal time for hyperbaric therapy in the RA rat model. Controlled animal study. Following injection of complete Freund's adjuvant (CFA) into one side of the knee joint, 32 rats were randomly assigned to 3 different time groups (1, 3, or 5 hours a day) in a hyperbaric chamber at 1.5 atmospheres absolute (ATA) for 12 days. Pain levels were assessed daily for 2 weeks by the weight bearing force (WBF) of the affected limb. In addition, the levels of gelatinase, MMP-2, and MMP-9 expression in the synovial fluids of the knees were analyzed. The reduction of WBF peaked at 2 days after injection, and WBF then spontaneously increased up to 14 days in all 3 groups. There were significant differences in WBF between the 5-hour and control groups during the third through twelfth days, between the 3-hour and control groups during the third through fifth and tenth through twelfth days, and between the 3-hour and 5-hour groups during the third through seventh days (P < 0.05). The MMP-9/MMP-2 ratio increased at 14 days after the CFA injection in all groups compared to the initial findings; however, the 3-hour group showed a smaller MMP-9/MMP-2 ratio than the control group. Although enough samples were used to support our hypothesis, more samples will be needed to strengthen the validity and reliability. The effect of hyperbaric treatment appears to depend upon the therapy time at 1.5 ATA pressure over a short period; however, the long-term effects were similar in all pressure groups. Further study will be needed to acquire the optimal pressure-treatment parameter relationship in various conditions for clinical application.
Cao, Qi; Leung, K M
2014-09-22
Reliable computer models for the prediction of chemical biodegradability from molecular descriptors and fingerprints are very important for making health and environmental decisions. Coupling of the differential evolution (DE) algorithm with the support vector classifier (SVC) in order to optimize the main parameters of the classifier resulted in an improved classifier called the DE-SVC, which is introduced in this paper for use in chemical biodegradability studies. The DE-SVC was applied to predict the biodegradation of chemicals on the basis of extensive sample data sets and known structural features of molecules. Our optimization experiments showed that DE can efficiently find the proper parameters of the SVC. The resulting classifier possesses strong robustness and reliability compared with grid search, genetic algorithm, and particle swarm optimization methods. The classification experiments conducted here showed that the DE-SVC exhibits better classification performance than models previously used for such studies. It is a more effective and efficient prediction model for chemical biodegradability.
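A minimal sketch of the coupling described above, assuming the usual SVC hyperparameters (C and gamma) are the "main parameters" being tuned; the dataset, bounds and evaluation budget are placeholders, not the paper's setup.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in data; the paper used molecular descriptors/fingerprints.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def neg_cv_accuracy(params):
    """Objective for DE: minimize negative cross-validated accuracy."""
    log_C, log_gamma = params
    clf = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
    return -cross_val_score(clf, X, y, cv=5).mean()

# Search C and gamma on a log10 scale, as is customary for SVCs;
# the small maxiter keeps this demo cheap.
result = differential_evolution(neg_cv_accuracy, bounds=[(-2, 3), (-4, 1)],
                                seed=0, maxiter=20, tol=1e-3)
log_C, log_gamma = result.x
print(f"C={10**log_C:.3g}, gamma={10**log_gamma:.3g}, "
      f"CV accuracy={-result.fun:.3f}")
```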
Decentralized Fuzzy MPC on Spatial Power Control of a Large PHWR
NASA Astrophysics Data System (ADS)
Liu, Xiangjie; Jiang, Di; Lee, Kwang Y.
2016-08-01
Reliable power control to stabilize spatial oscillations is important for ensuring the safe operation of a modern pressurized heavy water reactor (PHWR), since these spatial oscillations can cause “flux tilting” in the reactor core. In this paper, a decentralized fuzzy model predictive control (DFMPC) is proposed for the spatial control of a PHWR. Because of the load-dependent dynamics of the nuclear power plant, fuzzy modeling is used to approximate the nonlinear process. A fuzzy Lyapunov function and a “quasi-min-max” strategy are utilized in designing the DFMPC to reduce conservatism. Plant-wide stability is achieved through an asymptotically positive realness constraint (APRC) on the decentralized MPC. The optimization problem is solved in a receding horizon scheme using the linear matrix inequality (LMI) technique. Dynamic simulations demonstrate that the designed DFMPC can effectively suppress spatial oscillations developed in the PHWR and shows advantages over the typical parallel distributed compensation (PDC) control scheme.
Capacity and reliability analyses with applications to power quality
NASA Astrophysics Data System (ADS)
Azam, Mohammad; Tu, Fang; Shlapak, Yuri; Kirubarajan, Thiagalingam; Pattipati, Krishna R.; Karanam, Rajaiah
2001-07-01
The deregulation of energy markets, the ongoing advances in communication networks, the proliferation of intelligent metering and protective power devices, and the standardization of software/hardware interfaces are creating a dramatic shift in the way facilities acquire and utilize information about their power usage. Currently available power management systems gather a vast amount of information in the form of power usage, voltages, currents, and their time-dependent waveforms from a variety of devices (for example, circuit breakers, transformers, energy and power quality meters, protective relays, programmable logic controllers, and motor control centers). What is lacking is an information processing and decision support infrastructure to harness this voluminous information into usable operational and management knowledge to manage equipment health and power quality, minimize downtime and outages, and optimize operations to improve productivity. This paper considers the problem of capacity and reliability analysis for power systems with very high availability requirements (e.g., systems providing energy to data centers and communication networks with desired availability of up to 0.9999999). Real-time capacity and margin analysis helps operators to plan for additional loads and to schedule repair/replacement activities. The reliability analysis, based on a computationally efficient sum of disjoint products, enables analysts to decide the optimum levels of redundancy, aids operators in prioritizing maintenance options for a given budget, and supports monitoring the system for capacity margin. The resulting analytical and software tool is demonstrated on a sample data center.
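To make the availability target concrete: under the common textbook assumption of independent unit failures, the redundancy level needed for a "seven nines" availability can be computed directly; the per-unit availability below is hypothetical.

```python
# How many redundant units does a 'seven nines' target require?
# Assumes independent failures and a 1-out-of-n (fully redundant)
# configuration; real systems also need common-cause analysis.
def parallel_availability(a_unit, n):
    """Availability of n independent units in parallel."""
    return 1.0 - (1.0 - a_unit) ** n

target = 0.9999999          # availability target cited in the paper
a_unit = 0.995              # hypothetical single-unit availability
n = 1
while parallel_availability(a_unit, n) < target:
    n += 1
print(n, parallel_availability(a_unit, n))   # -> 4 units suffice here
```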
Development of a questionnaire to evaluate asthma control in Japanese asthma patients.
Tohda, Yuji; Hozawa, Soichiro; Tanaka, Hiroshi
2018-01-01
The asthma control questionnaires used in Japan are Japanese translations of those developed outside Japan and have some limitations; a questionnaire designed to optimally evaluate asthma control levels in Japanese patients may be necessary. The present study was conducted to validate the Japan Asthma Control Survey (JACS) questionnaire in Japanese asthma patients. A total of 226 adult patients with mild to severe persistent asthma were enrolled and responded to the JACS questionnaire, the Asthma Control Questionnaire (ACQ), and the Mini Asthma Quality of Life Questionnaire (Mini AQLQ) at Weeks 0 and 4. The reliability, validity, and sensitivity/responsiveness of the JACS questionnaire were evaluated. The intra-class correlation coefficients (ICCs) were within the range of 0.55-0.75 for all JACS scores, indicating moderate/substantial reproducibility. For internal consistency, Cronbach's alpha coefficients ranged from 0.76 to 0.92 for the total and subscale scores, which were greater than the lower limit of acceptable internal consistency. As for factor validity, the cumulative contribution ratio of the four main factors was 0.66. For criterion-related validity, the correlation coefficients between the JACS total score and the ACQ5, ACQ6, and Mini AQLQ scores were -0.78, -0.78, and 0.77, respectively, showing significant correlations (p < 0.0001). The JACS questionnaire was validated in terms of reliability and validity. It will be necessary to evaluate the therapeutic efficacy measured by the JACS questionnaire and to calculate cutoff values for asthma control status in a larger number of patients. UMIN000016589. Copyright © 2017 Japanese Society of Allergology. Production and hosting by Elsevier B.V. All rights reserved.
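For reference, Cronbach's alpha as reported above can be computed from an item-score matrix as in this sketch; the scores shown are invented, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-item questionnaire scored by 6 respondents.
scores = np.array([[4, 4, 3, 4, 4],
                   [2, 2, 3, 2, 1],
                   [3, 3, 3, 2, 3],
                   [5, 4, 5, 5, 4],
                   [1, 2, 1, 1, 2],
                   [3, 4, 3, 3, 3]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```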
A Novel Hybrid Mental Spelling Application Based on Eye Tracking and SSVEP-Based BCI
Stawicki, Piotr; Gembler, Felix; Rezeika, Aya; Volosyak, Ivan
2017-01-01
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), as well as eye-tracking devices, provide a pathway for re-establishing communication for people with severe disabilities. We fused these control techniques into a novel eye-tracking/SSVEP hybrid system, which utilizes eye tracking for initial rough selection and SSVEP technology for fine target activation. Based on our previous studies, only four stimuli were used for the SSVEP aspect, granting sufficient control for most BCI users. As eye-tracking data are not used for the activation of letters, false positives due to inappropriate dwell times are avoided. This novel approach combines the high speed of eye-tracking systems and the high classification accuracies of low-target SSVEP-based BCIs, leading to an optimal combination of both methods. We evaluated the accuracy and speed of the proposed hybrid system with a 30-target spelling application implementing all three control approaches (pure eye tracking, SSVEP, and the hybrid system) with 32 participants. Although the highest information transfer rates (ITRs) were achieved with pure eye tracking, a considerable proportion of subjects were not able to gain sufficient control over the stand-alone eye-tracking device or the pure SSVEP system (78.13% and 75% of the participants reached reliable control, respectively). In this respect, the proposed hybrid was the most universal (over 90% of users achieved reliable control) and outperformed the pure SSVEP system in terms of speed and user friendliness. The presented hybrid system might offer communication to a wider range of users in comparison to the standard techniques. PMID:28379187
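The ITRs mentioned above are conventionally computed with the Wolpaw formula in BCI studies; assuming that convention (the abstract does not state its exact computation), a sketch:

```python
import math

def wolpaw_itr(n_targets, accuracy, seconds_per_selection):
    """Information transfer rate (bits/min) via the Wolpaw formula."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

# Hypothetical numbers: 30-target speller, 90% accuracy, 4 s/selection.
print(f"{wolpaw_itr(30, 0.90, 4.0):.1f} bits/min")
```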
Meta-Heuristics in Short Scale Construction: Ant Colony Optimization and Genetic Algorithm.
Schroeders, Ulrich; Wilhelm, Oliver; Olaru, Gabriel
2016-01-01
The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.
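A toy version of the GA item-selection strategy, using Cronbach's alpha as an unspecific fitness function; the data, operators and settings are illustrative and far simpler than the study's tailored procedures.

```python
import numpy as np

def cronbach_alpha(items):
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(0, ddof=1).sum()
                          / items.sum(1).var(ddof=1))

def ga_short_scale(data, scale_len, pop=40, gens=60, rng=None):
    """Pick `scale_len` of the items maximizing alpha with a tiny GA."""
    rng = rng or np.random.default_rng(0)
    n_items = data.shape[1]
    population = [rng.choice(n_items, scale_len, replace=False)
                  for _ in range(pop)]
    for _ in range(gens):
        fitness = [cronbach_alpha(data[:, ind]) for ind in population]
        order = np.argsort(fitness)[::-1]
        parents = [population[i] for i in order[:pop // 2]]
        children = []
        for p1, p2 in zip(parents[::2], parents[1::2]):
            pool = np.union1d(p1, p2)               # crossover: pooled items
            child = rng.choice(pool, scale_len, replace=False)
            if rng.random() < 0.2:                  # mutation: swap one item
                out = rng.integers(scale_len)
                child[out] = rng.choice(np.setdiff1d(np.arange(n_items), child))
            children.append(child)
        population = (parents + children * 2)[:pop]
    return max(population, key=lambda ind: cronbach_alpha(data[:, ind]))

# Synthetic item pool: 30 items sharing one latent factor plus noise.
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 1)) + rng.normal(size=(200, 30))
best = ga_short_scale(data, scale_len=8)
print(sorted(best), round(cronbach_alpha(data[:, best]), 3))
```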
Homann, Stefanie; Hofmann, Christian; Gorin, Aleksandr M.; Nguyen, Huy Cong Xuan; Huynh, Diana; Hamid, Phillip; Maithel, Neil; Yacoubian, Vahe; Mu, Wenli; Kossyvakis, Athanasios; Sen Roy, Shubhendu; Yang, Otto Orlean
2017-01-01
Transfection is one of the most frequently used techniques in molecular biology and is also applicable to gene therapy studies in humans. One of the biggest challenges in investigating protein function and interaction in gene therapy studies is to have reliable monospecific detection reagents, particularly antibodies, for all human gene products. Thus, a reliable method that can optimize transfection efficiency based not only on expression of the target protein of interest but also on the uptake of the nucleic acid plasmid can be an important tool in molecular biology. Here, we present a simple, rapid and robust flow cytometric method that can be used as a tool to optimize transfection efficiency at the single-cell level while overcoming limitations of previously established methods for quantifying transfection efficiency. By using optimized ratios of transfection reagent and a nucleic acid (DNA or RNA) vector directly labeled with a fluorochrome, this method can be used to simultaneously quantify the cellular toxicity of different transfection reagents, the amount of nucleic acid plasmid that cells have taken up during transfection, and the amount of the encoded expressed protein. Finally, we demonstrate that this method is reproducible, can be standardized, and can reliably and rapidly quantify transfection efficiency, reducing assay costs and increasing throughput while improving data robustness. PMID:28863132
Adaptive particle swarm optimization for optimal orbital elements of binary stars
NASA Astrophysics Data System (ADS)
Attia, Abdel-Fattah
2016-12-01
The paper presents an adaptive particle swarm optimization (APSO) as an alternative method for determining the optimal orbital elements of the star η Bootis of MK type G0 IV. The proposed algorithm transforms the problem of finding periodic orbits into one of detecting the global minimizers of an objective function, to obtain the best fit of the Keplerian and phase curves. The experimental results demonstrate that the proposed APSO approach is generally more accurate than the standard particle swarm optimization (PSO) and other published optimization algorithms in terms of solution accuracy, convergence speed and algorithm reliability.
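A minimal standard PSO, the baseline that APSO extends by adapting its coefficients; the objective below is a toy residual standing in for the Keplerian/phase-curve fit, and all settings are illustrative.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO; APSO in the paper adapts the coefficients."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.72, 1.49, 1.49          # common constriction values
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy residual in place of the Keplerian/phase-curve fit in the paper.
best, err = pso_minimize(lambda p: ((p - [1.0, 2.0, 0.5]) ** 2).sum(),
                         bounds=[(0, 5)] * 3)
print(best, err)
```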
New Tween-80 microbiological assay of serum folate levels in humans and animals.
Zhou, Zhenghua; Yang, Yuan; Li, Ming; Kou, Chong; Xiao, Ping; Jiang, Yan; Hong, Junrong; Huang, Chengyu
2012-01-01
The objective of this study was to develop a new Tween-80 microbiological assay (Tween-80 MBA) to determine human or animal serum folate levels and to verify its reliability. The effects of the Lactobacillus casei subspecies rhamnosus (L. casei, ATCC No. 7469) inoculum concentration, incubation time, and Tween-80 on L. casei growth were studied, and serum folate levels were investigated. Serum samples were collected from patients with esophageal cancer (EC) and healthy control subjects in Yanting, healthy adult subjects in Chengdu, Sichuan, and male Sprague-Dawley rats. Optimal conditions for the new MBA were as follows: 1.28 × 10^7 CFU/mL working inoculum, vitamin folic acid assay broth with 0.24% (w/w) Tween-80, and anaerobic incubation with L. casei at 37 °C for 22 h. Under the optimal conditions, the working curve followed a simple linear rather than a logarithmic equation; the linear working curve of folic acid standard working solution concentration versus the turbidity (absorption value) of medium with L. casei ranged from 0.05 to 1.00 µg/L; the linear correlation coefficient was 0.9989 (SD 0.0007); the recovery rate of folate was 105.4-112.7%; and the minimum detectable folate concentration was 0.03 µg/L. The within-day and between-day precisions (RSD) were 5.6% and 3.3%, respectively. The serum folate level of 100 EC patients was 6.4 (SEM 0.4) µg/L, which was significantly lower than that of healthy control subjects [8.0 (SEM 0.6) µg/L, n = 100, P = 0.020]. The new Tween-80 MBA is considered to be a reliable method for measuring serum folate levels.
Transonic Wing Shape Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2002-01-01
A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings, including multi-objective solutions that lead to the generation of Pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.
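The Pareto fronts mentioned above consist of the non-dominated candidate designs; a small sketch of their extraction, with hypothetical (drag, -lift) objective pairs standing in for the wing objectives:

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated rows in a (n_designs, n_objectives)
    array, where every objective is to be minimized."""
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            dominated = np.all(costs <= costs[i], axis=1) & \
                        np.any(costs < costs[i], axis=1)
            keep[i] = not dominated.any()
    return np.flatnonzero(keep)

# Hypothetical (drag, -lift) pairs for candidate wings: minimize both.
designs = np.array([[0.021, -1.10], [0.019, -0.95],
                    [0.025, -1.25], [0.019, -1.05], [0.030, -1.00]])
print(pareto_front(designs))   # -> indices of the non-dominated designs
```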
The Pareidolia Test: A Simple Neuropsychological Test Measuring Visual Hallucination-Like Illusions.
Mamiya, Yasuyuki; Nishio, Yoshiyuki; Watanabe, Hiroyuki; Yokoi, Kayoko; Uchiyama, Makoto; Baba, Toru; Iizuka, Osamu; Kanno, Shigenori; Kamimura, Naoto; Kazui, Hiroaki; Hashimoto, Mamoru; Ikeda, Manabu; Takeshita, Chieko; Shimomura, Tatsuo; Mori, Etsuro
2016-01-01
Visual hallucinations are a core clinical feature of dementia with Lewy bodies (DLB), and this symptom is important in the differential diagnosis and prediction of treatment response. The pareidolia test is a tool that evokes visual hallucination-like illusions, and these illusions may be a surrogate marker of visual hallucinations in DLB. We created a simplified version of the pareidolia test and examined its validity and reliability to establish the clinical utility of this test. The pareidolia test was administered to 52 patients with DLB, 52 patients with Alzheimer's disease (AD) and 20 healthy controls (HCs). We assessed the test-retest/inter-rater reliability using the intra-class correlation coefficient (ICC) and the concurrent validity using the Neuropsychiatric Inventory (NPI) hallucinations score as a reference. A receiver operating characteristic (ROC) analysis was used to evaluate the sensitivity and specificity of the pareidolia test in differentiating DLB from AD and HCs. The pareidolia test required approximately 15 minutes to administer, exhibited good test-retest/inter-rater reliability (ICC of 0.82), and correlated moderately with the NPI hallucinations score (rs = 0.42). Using an optimal cut-off score set according to the ROC analysis, the pareidolia test differentiated DLB from AD with a sensitivity of 81% and a specificity of 92%. Our study suggests that the simplified version of the pareidolia test is a valid and reliable surrogate marker of visual hallucinations in DLB.
Microgrid Controller and Advanced Distribution Management System Survey Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Starke, Michael R.; Herron, Andrew N.
2016-07-01
A microgrid controller, which serves as the heart of a microgrid, is responsible for optimally managing the distributed energy resources, energy storage systems, and responsive demand, and for ensuring that the microgrid is operated in an efficient, reliable, and resilient way. As the market for microgrids has blossomed in recent years, many vendors have released their own microgrid controllers to meet the various needs of different microgrid clients. However, due to the absence of a recognized standard for such controllers, vendor-supported microgrid controllers have a range of functionalities that differ significantly from each other in many respects. As a result, the current state of the industry has been difficult to assess. To remedy this situation, the authors conducted a survey of the functions of microgrid controllers developed by vendors and national laboratories. This report presents a clear indication of the state of the microgrid-controller industry based on analysis of the survey results. The results demonstrate that US Department of Energy funded research in microgrid controllers is unique and not competing with that of industry.
Sensor Needs for Control and Health Management of Intelligent Aircraft Engines
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay; Hunter, Gary W.; Guo, Ten-Huei; Semega, Kenneth J.
2004-01-01
NASA and the U.S. Department of Defense are conducting programs which support the future vision of "intelligent" aircraft engines for enhancing the affordability, performance, operability, safety, and reliability of aircraft propulsion systems. Intelligent engines will have advanced control and health management capabilities enabling these engines to be self-diagnostic, self-prognostic, and adaptive to optimize performance based upon the current condition of the engine or the current mission of the vehicle. Sensors are a critical technology necessary to enable the intelligent engine vision, as they are relied upon to accurately collect the data required for engine control and health management. This paper reviews the anticipated sensor requirements to support the future vision of intelligent engines from a control and health management perspective. Propulsion control and health management technologies are discussed in the broad areas of active component controls, propulsion health management, and distributed controls. In each of these three areas, individual technologies are described, input parameters necessary for control feedback or health management are discussed, and sensor performance specifications for measuring these parameters are summarized.
NASA Astrophysics Data System (ADS)
Kiani, Morgan Mozhgan
Inherent difficulties in the management of electric power in the presence of an increasing demand for energy, non-conventional loads such as digital appliances, and non-sustainable imported fossil fuels have initiated a multi-folded effort by many countries to restructure the way electric energy is generated, dispatched, and consumed. The smart power grid is the manifestation of many technologies that would eventually transform the existing power grid into a more flexible, fault-resilient, and intelligent system. Integration of distributed renewable energy sources plays a central role in the successful implementation of this transformation. Among the renewable options, wind energy harvesting offers superior engineering and economic incentives with minimal environmental impact. Doubly fed induction generators (DFIG) have turned into a serious contender for wind energy generators due to their flexibility in the control of active and reactive power with minimal silicon loss. The significant presence of voltage unbalance and system harmonics in finite-inertia transmission lines can potentially undermine the reliability of these wind generators. The present dissertation has investigated the impacts of system unbalances and harmonics on the performance of the DFIG. Our investigation indicates that these effects can result in an undesirable undulation of the rotor shaft, which can potentially invoke mechanical resonance, thereby causing catastrophic damage to the installations and the power grid. In order to remedy the above issue, a control solution for real-time monitoring of the system unbalance and optimal excitation of the three-phase rotor currents in a DFIG is offered. The optimal rotor currents create appropriate components of the magneto-motive force in the airgap that actively compensate the undesirable magnetic field originated by the stator windings. Due to the iterative nature of the optimization procedure, the field reconstruction method has been incorporated; it provides high-precision results at a considerably faster pace than the finite element method. Our results indicate that by just-in-time detection of the system unbalance and employment of the optimal rotor currents, damaging torque pulsations can be effectively eliminated. The side effects of the proposed method in changing the core, copper, and silicon losses are minor and well justified when the reliability of the wind generation units is considered.
Effects of back posture education on elementary schoolchildren's back function.
Geldhof, Elisabeth; Cardon, Greet; De Bourdeaudhuij, Ilse; Danneels, Lieven; Coorevits, Pascal; Vanderstraeten, Guy; De Clercq, Dirk
2007-06-01
The possible effects of back education on children's back function have never been evaluated. Therefore, the main aim of the present study was to evaluate the effects of back education on back function parameters in elementary schoolchildren. Since the reliability of back function measurement in children is poorly defined, another objective was to test the selected instruments for reliability in 8-11-year-olds. The multi-factorial intervention, lasting two school years, consisted of a back education program and the stimulation of postural dynamism in the class. Trunk muscle endurance, leg muscle capacity and spinal curvature were evaluated in a pre-post design including 41 children who received the back education program (mean age at post-test: 11.2 +/- 0.9 years) and 28 controls (mean age at post-test: 11.4 +/- 0.6 years). In addition, test-retest reliability with a 1-week interval was investigated in a separate sample: 47 children (mean age: 10.1 +/- 0.5 years) were tested for the reliability of trunk muscle endurance, and 40 children (mean age: 10.2 +/- 0.7 years) for the assessment of spinal curvatures. Reliability of endurance testing was very good for the trunk flexors (ICC = 0.82) and good for the trunk extensors (ICC = 0.63). The assessment of the thoracic (ICC = 0.69) and the lumbar curvature (ICC = 0.52) in the sitting position showed good to acceptable reliability. Low ICCs were found for the assessment of the thoracic (ICC = 0.39) and the lumbar curvature (ICC = 0.37) in standing. The 2-year back education program produced an increase in trunk flexor endurance in the intervention group compared to a decrease in the controls, and a trend towards significance for a higher increase in trunk extensor endurance in the intervention group. For leg muscle capacity and spinal curvature, no intervention effects were found. The small samples warrant cautious interpretation of the intervention effects. However, the present study's findings favor the implementation of back education, with a focus on postural dynamism in the class, as an integral part of the elementary school curriculum in the scope of optimizing spinal loading through the school environment.
De Filippis, Luigi Alberto Ciro; Serio, Livia Maria; Facchini, Francesco; Mummolo, Giovanni; Ludovico, Antonio Domenico
2016-01-01
A simulation model was developed for the monitoring, control and optimization of the Friction Stir Welding (FSW) process. This approach, using the FSW technique, allows identifying the correlation between the process parameters (input variables) and the mechanical properties (output responses) of welded AA5754 H111 aluminum plates. The optimization of technological parameters is a basic requirement for increasing seam quality, since it promotes a stable and defect-free process. The tool rotation and travel speed, the position of the samples extracted from the weld bead, and the thermal data, detected with thermographic techniques for on-line control of the joints, were varied to build the experimental plans. The quality of the joints was evaluated through destructive and non-destructive tests (visual tests, macrographic analysis, tensile tests, Vickers hardness indentation tests and thermographic controls). The simulation model was based on Artificial Neural Networks (ANNs) with a back-propagation learning algorithm and different types of architecture, which were able to predict the FSW process parameters for the welding of AA5754 H111 aluminum plates in butt-joint configuration with good reliability. PMID:28774035
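A minimal sketch of such an ANN surrogate, assuming a back-propagation-trained MLP mapping (tool rotation, travel speed) to tensile strength; the training records are invented, not the paper's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical records: (tool rotation rpm, travel speed mm/min)
# -> ultimate tensile strength MPa; the paper's data are not reproduced.
X = np.array([[1000, 100], [1000, 200], [1400, 100],
              [1400, 200], [1800, 100], [1800, 200]], dtype=float)
y = np.array([215.0, 205.0, 225.0, 218.0, 210.0, 200.0])

# A small back-propagation-trained MLP, as in the paper's ANN approach.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,),
                                   max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[1200.0, 150.0]]))   # strength at unseen parameters
```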
NASA Astrophysics Data System (ADS)
Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca
2014-12-01
The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of the distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active networks by means of advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints and costs for each resource, the algorithm generates, for each time period, a set of commands for controllable resources that guarantees the achievement of technical goals while minimizing the overall cost. Before integrating the controller into the telecontrol system of real networks, and in order to validate the proper behaviour of the algorithm and identify possible critical conditions, a complete simulation phase has been started. The first step concerns the definition of a wide range of "case studies", i.e., combinations of network topology, technical constraints and targets, load and generation profiles, and "costs" of resources that define a valid context for testing the algorithm, with particular focus on battery and RES management. First results from the simulation activity on test networks (based on real MV grids) and actual battery characteristics are given, together with prospective performance in real-case applications.
Design and control of the precise tracking bed based on complex electromechanical design theory
NASA Astrophysics Data System (ADS)
Ren, Changzhi; Liu, Zhao; Wu, Liao; Chen, Ken
2010-05-01
Precise tracking technology is widely used in astronomical instruments, satellite tracking and aeronautic test beds. The precise ultra-low-speed tracking drive system is a highly integrated electromechanical system, and a complex electromechanical design method is adopted to improve the efficiency, reliability and quality of the system over the design and manufacturing cycle. The precise tracking bed is an ultra-exact, ultra-low-speed, high-precision, high-inertia instrument, whose mechanisms and ultra-low-speed operating environment differ from those of general technology. This paper explores the design process based on complex electromechanical optimizing design theory; a non-PID control method with CMAC feedforward is used in the servo system of the precise tracking bed, and some simulation results are discussed.
Unsolved Problems of Intracellular Noise
NASA Astrophysics Data System (ADS)
Paulsson, Johan
2003-05-01
Many molecules are present at such low numbers per cell that significant fluctuations arise spontaneously. Such `noise' can randomize developmental pathways, disrupt cell cycle control or force metabolites away from their optimal levels. It can also be exploited for non-genetic individuality or, surprisingly, for more reliable and deterministic control. However, in spite of the mechanistic and evolutionary significance of noise, both explicit modeling and implicit verbal reasoning in molecular biology are completely dominated by macroscopic kinetics. Here I discuss some particularly under-addressed issues of noise in genetic and metabolic networks: 1) relations between systematic macro- and mesoscopic approaches; 2) order and disorder in gene expression; 3) autorepression for checking fluctuations; 4) noise suppression by noise; 5) phase transitions in metabolic systems; 6) effects of cell growth and division; and 7) mono- and bistable bimodal switches.
A maintenance model for k-out-of-n subsystems aboard a fleet of advanced commercial aircraft
NASA Technical Reports Server (NTRS)
Miller, D. R.
1978-01-01
Proposed highly reliable fault-tolerant reconfigurable digital control systems for a future generation of commercial aircraft consist of several k-out-of-n subsystems. Each of these flight-critical subsystems will consist of n identical components, k of which must be functioning properly in order for the aircraft to be dispatched. Failed components are recoverable; they are repaired in a shop. Spares are inventoried at a main base where they may be substituted for failed components on planes during layovers. Penalties are assessed when failure of a k-out-of-n subsystem causes a dispatch cancellation or delay. A maintenance model for a fleet of aircraft with such control systems is presented. The goals are to demonstrate economic feasibility and to optimize.
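For reference, the dispatch reliability of a single k-out-of-n subsystem with independent, identical components follows directly from the binomial distribution; the numbers below are hypothetical.

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n identical, independent
    components (each working with probability p) are functioning."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical flight-critical subsystem: 3-out-of-5, p = 0.99 per component.
print(f"{k_out_of_n_reliability(3, 5, 0.99):.9f}")
```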
Packaging Technology for SiC High Temperature Circuits Operable up to 500 Degrees Centigrade
NASA Technical Reports Server (NTRS)
Chen, Lian-Yu
2002-01-01
New high temperature, low power 8-pin packages have been fabricated using a commercial fabrication service. These packages are made of aluminum nitride and 96 percent alumina with Au metallization. The new design of these packages provides the chips inside with EM shielding. Wirebond geometry control has been achieved for precise mechanical tests. Au wirebond samples with a 45 degree heel angle have been tested using a wire-loop test module. The geometry control improves the consistency of measurement of the wire-loop breaking point. Also reported is a parametric study of the thermomechanical reliability of an Au thick-film based SiC die-attach assembly, conducted using nonlinear finite element analysis (FEA) to optimize the die-attach thermomechanical performance for operation at temperatures from room temperature to 500 degrees Centigrade. This parametric study centered on material selection, structure design and process control.
Thermal control systems for low-temperature heat rejection on a lunar base
NASA Technical Reports Server (NTRS)
Sridhar, K. R.; Gottmann, Matthias; Nanjundan, Ashok
1993-01-01
One of the important issues in the design of a lunar base is the thermal control system (TCS) used to reject low-temperature heat from the base. The TCS ensures that the base and the components inside are maintained within an acceptable temperature range. The temperature of the lunar surface peaks at 400 K during the 336-hour lunar day. Under these circumstances, direct dissipation of waste heat from the lunar base using passive radiators would be impractical. Thermal control systems based on thermal storage, shaded radiators, and heat pumps have been proposed. Based on proven technology, innovation, realistic complexity, reliability, and near-term applicability, a heat pump-based TCS was selected as a candidate for early missions. In this report, Rankine-cycle heat pumps and absorption heat pumps (ammonia water and lithium bromide-water) have been analyzed and optimized for a lunar base cooling load of 100 kW.
Use of a quality trait index to increase the reliability of phenotypic evaluations in broccoli
USDA-ARS's Scientific Manuscript database
Selection of superior broccoli hybrids involves multiple considerations, including optimization of head quality traits. Quality assessment of broccoli heads is often confounded by relatively subjective human preferences for optimal appearance of heads. To assist the selection process, we assessed fi...
NASA Astrophysics Data System (ADS)
Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.
2018-01-01
A set of mathematical models for calculating the reliability indexes of structurally complex multifunctional combined installations in heat and power supply systems was developed. Reliability of energy supply is considered a required condition for the creation and operation of heat and power supply systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and the additional system expenses for the creation and operation of an emergency capacity reserve. Rationing of the reliability indexes of industrial heat supply is based on the concept of a technological safety margin for production processes. The definition of rationed reliability index values for the heat supply of communal consumers is based on the air temperature level inside the heated premises. The complex allows solving a number of practical tasks for ensuring the reliability of heat supply for consumers. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat and power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks concerning the optimization of schemes and parameters of combined heat and power plants and systems, as well as to determine the efficiency of various redundancy methods to ensure the specified reliability of power supply.
Silver-free Metallization Technology for Producing High Efficiency, Industrial Silicon Solar Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaelson, Lynne M.; Munoz, Krystal; Karas, Joseph
The goal of this project is to provide a commercially viable Ag-free metallization technology that will both reduce the cost and increase the efficiency of standard silicon solar cells. By removing silver from the front grid metallization and replacing it with lower cost nickel, copper, and tin, the front grid direct materials costs will decrease. This reduction in material costs should provide a path to meeting the SunShot 2020 goal of $1/W DC. As of today, plated contacts are not widely implemented in large scale manufacturing. For organizations that wish to implement pilot scale manufacturing, only two equipment choices exist, and these equipment manufacturers do not supply plating chemistry. The main goal of this project is to provide a chemistry and equipment solution to the industry that enables reliable manufacturing of plated contacts, marked by passing reliability results and higher efficiencies than silver paste front grid contacts. To date, there have been several key findings that point to plated contacts performing equal to or better than the current state-of-the-art silver paste contacts. Poor adhesion and reliability concerns are among the hurdles for plated contacts, specifically plated nickel directly on silicon. A key finding of the Phase 1 budget period is that the plated contacts have the same adhesion as the silver paste controls. This is a huge win for plated contacts. With very little optimization work, state-of-the-art electrical results for plated contacts on laser-ablated lines have been demonstrated, with efficiencies up to 19.1% and fill factors ~80% on grid lines 40-50 µm wide. The silver paste controls with similar line widths demonstrate similar electrical results. By optimizing the emitter and grid design for the plated contacts, it is expected that the electrical performance will exceed the silver paste controls. In addition, cells plated using Technic chemistry and equipment pass reliability testing, i.e., 1000 hours damp heat and 200 thermal cycles, with results similar to silver paste control cells. 100 cells have been processed through Technic's novel demo plating tool, built and installed during budget period 2. This plating tool performed consistently from cell to cell, providing gentle handling of the solar cells. An agreement has been signed with a cell manufacturer to process their cells through our plating chemistry and equipment. Their main focus for plated contacts is to reduce the direct materials cost by utilizing nickel, copper, and tin in place of silver paste. Based on current market conditions and cost model calculations, the overall savings offered by plated contacts is only 3.5% in $/W versus silver paste contacts; however, the direct materials savings depend on the silver market. If silver prices increase, plated contacts may find wider adoption in the solar industry as a way to keep the direct materials costs down for front grid contacts.
Difficult Decisions Made Easier
NASA Technical Reports Server (NTRS)
2006-01-01
NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or if an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goal of this research was multifold. It included performance and risk analyses of complex, multiphase missions, like the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.
ERIC Educational Resources Information Center
Meyer, J. Patrick; Liu, Xiang; Mashburn, Andrew J.
2014-01-01
Researchers often use generalizability theory to estimate relative error variance and reliability in teaching observation measures. They also use it to plan future studies and design the best possible measurement procedures. However, designing the best possible measurement procedure comes at a cost, and researchers must stay within their budget…
NASA Astrophysics Data System (ADS)
Bieniek, T.; Janczyk, G.; Dobrowolski, R.; Wojciechowska, K.; Malinowska, A.; Panas, A.; Nieprzecki, M.; Kłos, H.
2016-11-01
This paper covers research results on the development of cantilever beam test structures for the investigation of interconnect reliability and robustness. The presented results include the design, modelling, simulation, optimization and, finally, the fabrication stage, performed on 4-inch Si wafers using the ITE microfabrication facility. The paper also covers experimental results from the characterization of the test structures.
NASA Astrophysics Data System (ADS)
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still ever-expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON, etc. However, the poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models by using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
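A simple illustration of comparing hazard-rate models on failure-occurrence time-intervals by goodness-of-fit (here AIC over exponential versus Weibull fits); the interval data and model set are placeholders, not the paper's flexible hazard-rate model.

```python
import numpy as np
from scipy import stats

# Hypothetical failure-occurrence time-intervals (hours) for an
# embedded OSS; the paper analyzes its own field data.
intervals = np.array([12., 19., 25., 31., 35., 52., 64., 80., 97., 130.])

fits = {}
for name, dist in [("exponential", stats.expon),
                   ("Weibull", stats.weibull_min)]:
    params = dist.fit(intervals, floc=0)     # fix the location at zero
    loglik = dist.logpdf(intervals, *params).sum()
    # Crude AIC; the fixed location inflates k by one for both fits
    # equally, so the comparison between the two is unaffected.
    aic = 2 * len(params) - 2 * loglik
    fits[name] = aic
    print(f"{name}: AIC = {aic:.1f}")
print("preferred:", min(fits, key=fits.get))
```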
Multiple objective optimization in reliability demonstration test
Lu, Lu; Anderson-Cook, Christine Michaela; Li, Mingyang
2016-10-01
Reliability demonstration tests are usually performed in product design or validation processes to demonstrate whether a product meets specified requirements on reliability. For binomial demonstration tests, the zero-failure test has been most commonly used due to its simplicity and use of minimum sample size to achieve an acceptable consumer’s risk level. However, this test can often result in unacceptably high risk for producers as well as a low probability of passing the test even when the product has good reliability. This paper explicitly explores the interrelationship between multiple objectives that are commonly of interest when planning a demonstration test and proposes structured decision-making procedures using a Pareto front approach for selecting an optimal test plan based on simultaneously balancing multiple criteria. Different strategies are suggested for scenarios with different user priorities and graphical tools are developed to help quantify the trade-offs between choices and to facilitate informed decision making. As a result, potential impacts of some subjective user inputs on the final decision are studied to offer insights and useful guidance for general applications.
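To make the zero-failure trade-off concrete: the smallest sample size meeting a consumer's-risk target, and the resulting probability that a genuinely good product passes, can be computed as below (standard binomial zero-failure formulas, not the paper's multi-objective procedure).

```python
import math

def zero_failure_plan(r_lower, consumer_risk):
    """Smallest n such that a unit with reliability r_lower passes a
    zero-failure test with probability <= consumer_risk."""
    return math.ceil(math.log(consumer_risk) / math.log(r_lower))

def pass_probability(n, r_true):
    """Producer's view: chance a product with true reliability r_true
    survives n trials with zero failures."""
    return r_true ** n

n = zero_failure_plan(r_lower=0.90, consumer_risk=0.05)
print(n)                          # 29 trials
print(pass_probability(n, 0.98))  # ~0.56: a good product often fails
```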
System Analysis and Performance Benefits of an Optimized Rotorcraft Propulsion System
NASA Technical Reports Server (NTRS)
Bruckner, Robert J.
2007-01-01
The propulsion system of rotorcraft vehicles is the most critical system to the vehicle in terms of safety and performance. The propulsion system must provide both vertical lift and forward flight propulsion during the entire mission. Whereas propulsion is a critical element for all flight vehicles, it is particularly critical for rotorcraft due to their limited safe, unpowered landing capability. This unparalleled reliability requirement has led rotorcraft power plants down a certain evolutionary path in which the system looks and performs quite similarly to those of the 1960s. By and large, the advancements in rotorcraft propulsion have come in terms of safety and reliability and not in terms of performance. The concept of the optimized propulsion system is a means by which both reliability and performance can be improved for rotorcraft vehicles. The optimized rotorcraft propulsion system, which couples an oil-free turboshaft engine to a highly loaded gearbox that provides axial load support for the power turbine, can be designed with current laboratory-proven technology. Such a system can provide up to a 60% weight reduction in the propulsion system of rotorcraft vehicles. Several technical challenges are apparent at the conceptual design level and should be addressed with current research.
A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.
Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M
2011-01-20
A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine-scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are in better agreement with clinical measures than those from single view reconstructions. Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views.
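A sketch of the coarse initialization step, assuming the "3D Hotelling transform" amounts to aligning the centroids and principal axes of two point sets (with a third-moment sign fix to resolve axis orientation); the data are synthetic and the fine least-squares stage is omitted.

```python
import numpy as np

def principal_axes(points):
    """Centroid and eigenvectors of the covariance (3D Hotelling/PCA)."""
    c = points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((points - c).T))
    # Fix each axis sign via the third moment so orientation is unique
    # (works for skewed clouds; symmetric clouds stay ambiguous).
    proj = (points - c) @ vecs
    vecs = vecs * np.sign((proj ** 3).sum(axis=0))
    return c, vecs

def coarse_align(moving, fixed):
    """Rigidly align two point clouds by matching principal axes;
    serves only to initialize a finer least-squares registration."""
    cm, vm = principal_axes(moving)
    cf, vf = principal_axes(fixed)
    R = vf @ vm.T                      # rotation mapping moving -> fixed
    if np.linalg.det(R) < 0:           # guard against a reflection
        vf[:, 0] *= -1
        R = vf @ vm.T
    return (moving - cm) @ R.T + cf

rng = np.random.default_rng(0)
fixed = rng.exponential(1.0, size=(400, 3)) * [3.0, 1.0, 0.5]  # skewed cloud
theta = 0.6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0], [0, 0, 1]])
moving = fixed @ Rz.T + [5.0, -2.0, 1.0]
aligned = coarse_align(moving, fixed)
print(np.abs(aligned - fixed).max())   # small residual after coarse step
```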
Large-scale-system effectiveness analysis. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Foster, J.W.
1979-11-01
The objective of the research project has been the investigation and development of methods for calculating system reliability indices which have absolute, and measurable, significance to consumers. Such indices are a necessary prerequisite to any scheme for system optimization which includes the economic consequences of consumer service interruptions. A further area of investigation has been the joint consideration of generation and transmission in reliability studies. Methods for finding or estimating the probability distributions of some measures of reliability performance have been developed. The application of modern Monte Carlo simulation methods to compute reliability indices in generating systems has been studied.
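As one example of the Monte Carlo approach mentioned above, a loss-of-load probability estimate for a small hypothetical generating system:

```python
import numpy as np

# Monte Carlo estimate of loss-of-load probability (LOLP) for a small
# generating system: invented unit capacities, forced-outage rates,
# and a fixed load level.
rng = np.random.default_rng(42)
capacity = np.array([200., 200., 150., 100., 100.])   # MW per unit
outage_rate = np.array([0.05, 0.05, 0.08, 0.10, 0.10])
load = 450.0                                          # MW demand
n_trials = 200_000

up = rng.random((n_trials, capacity.size)) > outage_rate  # unit states
available = up.astype(float) @ capacity                   # MW available
lolp = (available < load).mean()
print(f"LOLP ~ {lolp:.4f}")
```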
Optimization of dose and image quality in adult and pediatric computed tomography scans
NASA Astrophysics Data System (ADS)
Chang, Kwo-Ping; Hsu, Tzu-Kun; Lin, Wei-Ting; Hsu, Wen-Lin
2017-11-01
An exploration to maximize CT image quality while reducing radiation dose was conducted, controlling for multiple factors. The kVp, mAs, and iterative reconstruction (IR) settings affect CT image quality and the absorbed radiation dose. Optimal protocols (kVp, mAs, IR) are derived from a figure of merit (FOM) based on CT image quality (CNR) and the CT dose index (CTDIvol). CT image quality metrics such as CT number accuracy, SNR, the CNR of low-contrast materials, and line-pair resolution were also analyzed as auxiliary assessments. CT protocols were carried out with an ACR accreditation phantom and a five-year-old pediatric head phantom. The threshold values of the adult CT scan parameters, 100 kVp and 150 mAs, were determined from the CT number test and the line pairs in ACR phantom module 1 and module 4, respectively. The findings of this study suggest that the optimal scanning parameters for adults be set at 100 kVp and 150-250 mAs. However, for improved low-contrast resolution, 120 kVp and 150-250 mAs are optimal. Optimal settings for pediatric head CT scans were 80 kVp/50 mAs for the maxillary sinus and brain stem, and 80 kVp/300 mAs for the temporal bone. SNR is not reliable as an independent image parameter nor as the metric for determining optimal CT scan parameters. The iterative reconstruction (IR) approach is strongly recommended for both adult and pediatric CT scanning, as it markedly improves image quality without affecting radiation dose.
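Assuming the FOM follows the common CT-literature convention FOM = CNR^2/CTDIvol (the abstract does not spell out its formula, so treat this as an assumption), protocol comparison reduces to:

```python
# Dose-efficiency figure of merit, assuming the common convention
# FOM = CNR^2 / CTDIvol; labels and numbers are illustrative only.
protocols = [
    ("100 kVp / 150 mAs", 1.9, 6.2),    # (label, CNR, CTDIvol in mGy)
    ("100 kVp / 250 mAs", 2.4, 10.3),
    ("120 kVp / 150 mAs", 2.2, 9.0),
]
for label, cnr, ctdi in protocols:
    print(f"{label}: FOM = {cnr**2 / ctdi:.3f}")
```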
Takahashi, Chie; Watt, Simon J.
2014-01-01
When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the “weight” given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different “gains” between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber's law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modeled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimizing the design of visual-haptic devices. PMID:24592245
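The statistically optimal fusion rule referred to above weights each cue by its reliability (inverse variance); a minimal sketch with hypothetical size estimates:

```python
def fuse_estimates(s_vis, var_vis, s_hap, var_hap):
    """Minimum-variance visual-haptic fusion: weights are proportional
    to each cue's reliability, defined as 1/variance."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_hap)
    s = w_vis * s_vis + (1 - w_vis) * s_hap
    var = 1 / (1 / var_vis + 1 / var_hap)  # never exceeds either cue's
    return s, var, w_vis

# Hypothetical size estimates (mm); a high-gain tool degrades the
# haptic estimate, so the visual weight should rise accordingly.
print(fuse_estimates(50.0, 4.0, 53.0, 4.0))   # equal weights
print(fuse_estimates(50.0, 4.0, 53.0, 16.0))  # vision dominates
```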
NASA Astrophysics Data System (ADS)
Dharmalingam, Gnanaprakash; Carpenter, Michael A.
2015-05-01
Monitoring polluting gases such as CO and NOx emitted from gas turbines in power plants and aircraft is important both to reduce the environmental effects of these gases and to optimize the performance of the respective power system. Fuel cost savings and a reduced environmental impact could be realized if air traffic utilized next-generation jet turbines with an emission/performance control sensing system. These monitoring systems must be sensitive and selective to gases, as well as reliable and stable under harsh environmental conditions, where operating temperatures exceed 500 °C in a highly reactive environment. In this work, plasmonics-based chemical sensors using nanocomposites of gold nanoparticles and yttria-stabilized zirconia (YSZ) have enabled sensitive (ppm-level) and stable (hundreds of hours) detection of H2, NO2, and CO at temperatures of 500 °C. Selectivity remains a challenging parameter to optimize, and a layer-by-layer sputter deposition approach has recently been demonstrated to modify the resulting sensing properties through a change in the morphology of the deposited films. Further enhancements are expected through control of the shape and geometry of the catalytically active Au nanoparticles; this level of control has been realized through the use of electron beam lithography to fabricate nanocomposite arrays. Sensing results for the detection of H2 will be highlighted, with specific concerns related to the optimization of these nanorod arrays detailed.
Self-balancing dynamic scheduling of electrical energy for energy-intensive enterprises
NASA Astrophysics Data System (ADS)
Gao, Yunlong; Gao, Feng; Zhai, Qiaozhu; Guan, Xiaohong
2013-06-01
Balancing production and consumption with self-generation capacity in energy-intensive enterprises has large economic and environmental benefits. It is, however, a challenging task, since energy production and consumption must be balanced in real time according to the criteria specified by the power grid. In this article, a mathematical model for minimising production cost with an exactly realisable energy delivery schedule is formulated, and a dynamic programming (DP)-based self-balancing dynamic scheduling algorithm is developed to obtain the complete solution set for this problem with multiple optimal solutions. For each stage, a set of conditions is established to determine whether a feasible control trajectory exists. The state space under these conditions is partitioned into subsets, each subset is viewed as an aggregate state, and the cost-to-go function is then expressed as a function of the initial and terminal generation levels of each stage and is proved to be a staircase function with finitely many steps. This avoids calculating the cost-to-go of every state and so resolves the curse of dimensionality in the DP algorithm. In the backward sweep of the algorithm, an optimal policy is determined to maximise the realisability of the energy delivery schedule across the entire time horizon; in the forward sweep, the feasible region of the optimal policy with the initial and terminal state at each stage is identified. Different feasible control trajectories can be identified from this region; the choice among them is then optimised over the region with economic and reliability objectives taken into account.
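As a generic illustration only (the authors' aggregate-state, staircase cost-to-go construction is more sophisticated), a textbook backward/forward DP sweep over discretized generation levels might look like this; all data are hypothetical:

    import numpy as np

    levels = np.arange(0, 101, 10)           # discretized generation levels [MW]
    demand = [40, 60, 80, 50]                # per-stage delivery targets [MW]
    ramp = 50                                # max level change between stages [MW]
    cost = lambda g: 0.05 * g**2 + 2.0 * g   # production cost per stage

    T, INF = len(demand), float("inf")
    ctg = np.zeros((T + 1, len(levels)))     # cost-to-go table, terminal cost 0
    # Backward sweep: fill the cost-to-go from the last stage to the first.
    for t in range(T - 1, -1, -1):
        for i, g in enumerate(levels):
            feas = [cost(g2) + ctg[t + 1, j]
                    for j, g2 in enumerate(levels)
                    if abs(g2 - g) <= ramp and g2 >= demand[t]]
            ctg[t, i] = min(feas) if feas else INF

    # Forward sweep: recover one feasible optimal trajectory from level 0.
    g_idx, plan = 0, []
    for t in range(T):
        candidates = [(cost(levels[j]) + ctg[t + 1, j], j)
                      for j in range(len(levels))
                      if abs(levels[j] - levels[g_idx]) <= ramp
                      and levels[j] >= demand[t]]
        _, g_idx = min(candidates)
        plan.append(int(levels[g_idx]))
    print("schedule:", plan, "cost:", ctg[0, 0])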
Trends in modern system theory
NASA Technical Reports Server (NTRS)
Athans, M.
1976-01-01
The topics considered are related to linear control system design, adaptive control, failure detection, control under failure, system reliability, and large-scale systems and decentralized control. It is pointed out that the design of a linear feedback control system which regulates a process about a desirable set point or steady-state condition in the presence of disturbances is a very important problem. The linearized dynamics of the process are used for design purposes. The typical linear-quadratic design, involving the solution of the optimal control problem of a linear time-invariant system with respect to a quadratic performance criterion, is considered along with gain reduction theorems and the multivariable phase margin theorem. The stumbling block in many adaptive design methodologies is the amount of real-time computation that is necessary. Attention is also given to the pressing need to develop good theories for large-scale systems, the beginning of a microprocessor revolution, the translation of the Wiener-Hopf theory into the time domain, and advances made in dynamic team theory, dynamic stochastic games, and finite memory stochastic control.
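For readers unfamiliar with the linear-quadratic design mentioned above: the optimal state feedback u = -Kx for a linear time-invariant plant under a quadratic criterion comes from the algebraic Riccati equation. A minimal sketch for a hypothetical double-integrator plant:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Hypothetical double-integrator plant: x = [position, velocity].
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([1.0, 0.1])    # state weighting in the quadratic criterion
    R = np.array([[0.01]])     # control-effort weighting

    P = solve_continuous_are(A, B, Q, R)   # A'P + PA - PB R^-1 B'P + Q = 0
    K = np.linalg.solve(R, B.T @ P)        # optimal regulator gain, u = -K x
    print("LQR gain:", K)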
Chien, Shih-Hsiang; Dzombak, David A.; Vidic, Radisav D.
2013-01-01
Recent studies have shown that treated municipal wastewater can be a reliable cooling water alternative to fresh water. However, elevated nutrient concentration and microbial population in wastewater lead to aggressive biological proliferation in the cooling system. Three chlorine-based biocides were evaluated for the control of biological growth in cooling systems using tertiary treated wastewater as makeup, based on their biocidal efficiency and cost-effectiveness. Optimal chemical regimens for achieving successful biological growth control were elucidated based on batch-, bench-, and pilot-scale experiments. Biocide usage and biological activity in planktonic and sessile phases were carefully monitored to understand biological growth potential and biocidal efficiency of the three disinfectants in this particular environment. Water parameters, such as temperature, cycles of concentration, and ammonia concentration in recirculating water, critically affected the biocide performance in recirculating cooling systems. Bench-scale recirculating tests were shown to adequately predict the biocide residual required for a pilot-scale cooling system. Optimal residuals needed for proper biological growth control were 1, 2–3, and 0.5–1 mg/L as Cl2 for NaOCl, preformed NH2Cl, and ClO2, respectively. Pilot-scale tests also revealed that Legionella pneumophila was absent from these cooling systems when using the disinfectants evaluated in this study. Cost analysis showed that NaOCl is the most cost-effective for controlling biological growth in power plant recirculating cooling systems using tertiary-treated wastewater as makeup. PMID:23781129
Distributed autonomous systems: resource management, planning, and control algorithms
NASA Astrophysics Data System (ADS)
Smith, James F., III; Nguyen, ThanhVu H.
2005-05-01
Distributed autonomous systems, i.e., systems that have separated distributed components, each of which exhibits some degree of autonomy, are increasingly providing solutions to naval and other DoD problems. Recently developed control, planning and resource allocation algorithms for two types of distributed autonomous systems will be discussed. The first distributed autonomous system (DAS) to be discussed consists of a collection of unmanned aerial vehicles (UAVs) under fuzzy logic control. The UAVs fly and conduct meteorological sampling in a coordinated fashion, determined by their fuzzy logic controllers, to determine the atmospheric index of refraction. Once in flight, no human intervention is required. A fuzzy planning algorithm determines the optimal trajectory, sampling rate and pattern for the UAVs and an interferometer platform, while taking into account risk, reliability, priority for sampling in certain regions, fuel limitations, mission cost, and related uncertainties. The real-time fuzzy control algorithm running on each UAV gives the UAV limited autonomy, allowing it to change course immediately without consulting any commander, request other UAVs to help it, alter its sampling pattern and rate when observing interesting phenomena, or terminate the mission and return to base. The algorithms developed will be compared to a resource manager (RM) developed for another DAS problem related to electronic attack (EA). This RM is based on fuzzy logic and optimized by evolutionary algorithms; it allows a group of dissimilar platforms to use EA resources distributed throughout the group. For both DAS types, significant theoretical and simulation results will be presented.
Confidence bands for measured economically optimal nitrogen rates
USDA-ARS?s Scientific Manuscript database
While numerous researchers have computed economically optimal N rate (EONR) values from measured yield – N rate data, nearly all have neglected to compute or estimate the statistical reliability of these EONR values. In this study, a simple method for computing EONR and its confidence bands is descr...
NASA Astrophysics Data System (ADS)
Ivashkin, V. V.; Krylov, I. V.
2015-09-01
A method is developed to optimize flight trajectories to the asteroid Apophis that reliably forms a set of Pontryagin extremals for various boundary conditions of the flight and efficiently searches for the global optimum of the problem among its elements.
Minimum Control Requirements for Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Boulange, Richard; Jones, Harry
2002-01-01
Advanced control technologies are not necessary for the safe, reliable and continuous operation of Advanced Life Support (ALS) systems. ALS systems can be, and are, adequately controlled by simple, reliable, low-level methodologies and algorithms. The automation provided by advanced control technologies is claimed to decrease system mass and necessary crew time by reducing buffer size and minimizing crew involvement. In truth, these approaches increase control system complexity without clearly demonstrating an increase in reliability across the ALS system. Unless these control systems are as reliable as the hardware they control, there are no savings to be had. A baseline ALS system is presented with the minimal control system required for its continuous, safe, reliable operation. This baseline control system uses simple algorithms and scheduling methodologies and relies on human intervention only in the event of failure of the redundant backup equipment. This ALS system architecture is designed for reliable operation, with minimal components and minimal control system complexity. The fundamental design precept followed is "If it isn't there, it can't fail".
Multi-modal myocontrol: Testing combined force- and electromyography.
Nowak, Markus; Eiband, Thomas; Castellini, Claudio
2017-07-01
Myocontrol, that is, control of prostheses using bodily signals, has proved over the decades to be a surprisingly hard problem for the scientific community of assistive and rehabilitation robotics. In particular, traditional surface electromyography (sEMG) seems no longer sufficient to guarantee dexterity (i.e., control over several degrees of freedom) and, most importantly, reliability. Multi-modal myocontrol is the idea of using novel signal-gathering techniques as a replacement of, or alongside, sEMG to provide high-density and diverse signals that improve dexterity and make control more reliable. In this paper we present an offline and online assessment of multi-modal sEMG and force myography (FMG) targeted at hand and wrist myocontrol. A total of twenty sEMG and FMG sensors were used simultaneously, in several combined configurations, to predict opening/closing of the hand and activation of two degrees of freedom of the wrist in ten intact subjects. The analysis was aimed at determining the optimal sensor combination and control parameters; the experimental results indicate that sEMG sensors alone perform worst, yielding an nRMSE of 9.1%, while mixing FMG and sEMG, or using FMG only, reduces the nRMSE to 5.2-6.6%. To validate these results, we engaged the subject with median performance in an online goal-reaching task. Analysis of this further experiment reveals that the online behaviour is similar to the offline one.
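The abstract does not state how the RMSE is normalised; normalisation by the range of the target signal is one common convention, e.g.:

    import numpy as np

    def nrmse_percent(predicted, actual):
        # RMSE normalised by the range of the target signal, in percent
        # (one common convention; the paper may use a different normaliser).
        predicted, actual = np.asarray(predicted), np.asarray(actual)
        rmse = np.sqrt(np.mean((predicted - actual) ** 2))
        return 100.0 * rmse / (actual.max() - actual.min())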
NASA Astrophysics Data System (ADS)
Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang
2018-03-01
Based on stress-strength interference theory, a reliability model is established for a high-temperature, high-pressure multi-stage decompression control valve (HMDCV), and a temperature correction coefficient is introduced to revise the material fatigue limit at high temperature. The reliability of the key dangerous components and the fatigue sensitivity curve of each component are calculated and analyzed by this means, combining the fatigue-life analysis of the control valve with the reliability model. The proportional impact of each component on fatigue failure of the control valve system is obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life of the main pressure-bearing parts meets the technical requirements, and that the valve body and the sleeve have an obvious influence on control system reliability; stress concentration in key parts of the control valve can be reduced in the design process by improving the structure.
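For independent normally distributed stress and strength, the stress-strength interference model gives a closed-form reliability. A minimal sketch with hypothetical values, where a temperature correction factor k_T < 1 derates the fatigue limit, as the abstract describes:

    from math import sqrt
    from scipy.stats import norm

    def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        # R = P(strength > stress) for independent normal variables.
        z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
        return norm.cdf(z)

    # Hypothetical values [MPa]; k_T derates the fatigue limit at temperature.
    k_T = 0.85
    print(interference_reliability(k_T * 600.0, 40.0, 420.0, 35.0))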
Creation of Power Reserves Under the Market Economy Conditions
NASA Astrophysics Data System (ADS)
Mahnitko, A.; Gerhards, J.; Lomane, T.; Ribakov, S.
2008-09-01
The main task of control over an electric power system (EPS) is to ensure reliable power supply at the least cost, while observing requirements on power quality and supply reliability as well as cost limitations on energy resources. An available power reserve in an EPS is a necessary condition for keeping it in operation with normal operating variables maintained (frequency, node voltages, power flows via the transmission lines, etc.). The authors examine possibilities for creating power reserves that could be offered for sale by the electric power producer. They consider a procedure of price formation for the power reserves and propose a relevant mathematical model for a united EPS, the initial data being the fuel-cost functions for the individual systems, technological limitations on active power generation, and the consumers' load. The producer's maximum profit is taken as the optimization criterion. The model is exemplified by a concentrated EPS; the computations were performed using MATLAB.
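As a deliberately simplified, single-unit illustration of the price-formation idea (not the authors' united-EPS model, and sketched in Python rather than MATLAB), a producer can size a reserve offer by trading the reserve price against the incremental fuel cost of delivering it:

    from scipy.optimize import minimize_scalar

    # Hypothetical single unit: quadratic fuel cost, fixed energy set point.
    a, b, c = 0.04, 12.0, 100.0        # fuel cost a*p^2 + b*p + c
    p_energy, p_max = 60.0, 100.0      # committed output and unit limit [MW]
    reserve_price = 19.0               # assumed market price per MW of reserve

    def neg_profit(r):
        # Incremental fuel cost of actually delivering the reserve r.
        fuel = lambda p: a * p**2 + b * p + c
        incremental = fuel(p_energy + r) - fuel(p_energy)
        return -(reserve_price * r - incremental)

    res = minimize_scalar(neg_profit, bounds=(0.0, p_max - p_energy),
                          method="bounded")
    print(f"optimal reserve offer: {res.x:.1f} MW")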
Cork, Randy D.; Detmer, William M.; Friedman, Charles P.
1998-01-01
This paper describes details of four scales of a questionnaire, "Computers in Medical Care," measuring attributes of computer use, self-reported computer knowledge, computer feature demand, and computer optimism of academic physicians. The reliability (i.e., precision, or degree to which the scale's result is reproducible) and validity (i.e., accuracy, or degree to which the scale actually measures what it is supposed to measure) of each scale were examined by analysis of the responses of 771 full-time academic physicians across four departments at five academic medical centers in the United States. The objectives of this paper were to define the psychometric properties of the scales as the basis for a future demonstration study and, pending the results of further validity studies, to provide the questionnaire and scales to the medical informatics community as a tool for measuring the attitudes of health care providers. Methodology: The dimensionality of each scale and the degree of association of each item with the attribute of interest were determined by principal components factor analysis with orthogonal varimax rotation. Weakly associated items (factor loading < .40) were deleted. The reliability of each resultant scale was computed using Cronbach's alpha coefficient. Content validity was addressed during scale construction; construct validity was examined through factor analysis and by correlational analyses. Results: Attributes of computer use, computer knowledge, and computer optimism were unidimensional, with the corresponding scales having reliabilities of .79, .91, and .86, respectively. The computer-feature demand attribute differentiated into two dimensions: the first reflecting demand for high-level functionality, with a reliability of .81, and the second demand for usability, with a reliability of .69. There were significant positive correlations between computer use, computer knowledge, and computer optimism scale scores and respondents' hands-on computer use, computer training, and self-reported computer sophistication. In addition, items posited on the computer knowledge scale to be more difficult generated significantly lower scores. Conclusion: The four scales of the questionnaire appear to measure with adequate reliability five attributes of academic physicians' attitudes toward computers in medical care: computer use, self-reported computer knowledge, demand for computer functionality, demand for computer usability, and computer optimism. Results of initial validity studies are positive, but further validation of the scales is needed. The URL of a downloadable HTML copy of the questionnaire is provided. PMID:9524349
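Cronbach's alpha, used above for each scale, is computed directly from the item-score matrix. A minimal sketch, assuming rows are respondents and columns are scale items:

    import numpy as np

    def cronbach_alpha(items):
        # alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Example: 5 respondents x 4 items (hypothetical Likert scores).
    X = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 4, 5], [3, 3, 2, 3], [4, 4, 5, 4]]
    print(round(cronbach_alpha(X), 2))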
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
A method of network topology optimization design considering application process characteristic
NASA Astrophysics Data System (ADS)
Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo
2018-03-01
Communication networks are designed to meet users' requirements for various network applications. Previous studies of network topology optimization design mainly considered network traffic, which is the result of network application operation rather than a design element of communication networks. A network application is a procedure by which users make use of services under certain demanded performance requirements, and it has an obvious process characteristic. In this paper, we propose a method to optimize the design of communication network topology that takes the application process characteristic into account. Taking minimum network delay as the objective, and the cost of network design and network connectivity reliability as constraints, an optimization model of network topology design is formulated, and the optimal topology is searched for with a genetic algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under multiple process-oriented applications, which can guide the generation of the initial population and thus improve the efficiency of the GA. Numerical simulations show the effectiveness and validity of the proposed method. Network topology design that considers applications can improve the reliability of applications and provide guidance for network builders in the early stage of network design, which is of great significance in engineering practice.
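As a toy illustration of the GA search described above (using hop count as a crude delay proxy and a simple link-cost budget in place of the paper's delay model and connectivity-reliability constraint), each candidate link can be encoded as one bit of the chromosome:

    import random
    from itertools import combinations
    from collections import deque

    N = 8                                    # nodes (hypothetical small network)
    EDGES = list(combinations(range(N), 2))  # candidate links
    COST_PER_LINK, BUDGET = 1.0, 14.0

    def bfs_dists(adj, s):
        # Hop distances from node s over the adjacency lists.
        d, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    def fitness(bits):
        adj = {i: [] for i in range(N)}
        for bit, (u, v) in zip(bits, EDGES):
            if bit:
                adj[u].append(v)
                adj[v].append(u)
        if COST_PER_LINK * sum(bits) > BUDGET:
            return 1e9                       # over budget
        dists = [bfs_dists(adj, s) for s in range(N)]
        if any(len(d) < N for d in dists):
            return 1e9                       # disconnected
        return sum(sum(d.values()) for d in dists)  # total hop-delay proxy

    def evolve(pop_size=40, gens=100, p_mut=0.02):
        pop = [[random.randint(0, 1) for _ in EDGES] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)
            elite = pop[: pop_size // 2]     # keep the better half
            children = []
            while len(children) < pop_size - len(elite):
                a, b = random.sample(elite, 2)
                cut = random.randrange(1, len(EDGES))   # one-point crossover
                child = a[:cut] + b[cut:]
                children.append([1 - g if random.random() < p_mut else g
                                 for g in child])       # bit-flip mutation
            pop = elite + children
        return min(pop, key=fitness)

    best = evolve()
    print("links:", [e for bit, e in zip(best, EDGES) if bit])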
NASA Astrophysics Data System (ADS)
Bottasso, C. L.; Croce, A.; Riboldi, C. E. D.
2014-06-01
The paper presents a novel approach for the synthesis of the open-loop pitch profile during emergency shutdowns. The problem is of interest in the design of wind turbines, as such maneuvers often generate design-driving loads on some of the machine components. The pitch profile synthesis is formulated as a constrained optimal control problem, solved numerically using a direct single shooting approach. A cost function expressing a compromise between load reduction and rotor overspeed is minimized with respect to the unknown blade pitch profile. Constraints may include a requirement that the reduced loads not exceed the next-dominating loads, a not-to-be-exceeded maximum rotor speed, and a maximum achievable blade pitch rate. Cost function and constraints are computed over a possibly large number of operating conditions, defined so as to cover as well as possible the operating situations encountered in the lifetime of the machine. All such conditions are simulated using a high-fidelity aeroservoelastic model of the wind turbine, ensuring the accuracy of the evaluation of all relevant parameters. The paper demonstrates the capabilities of the proposed formulation by optimizing the pitch profile of a multi-MW wind turbine. Results show that the procedure can reliably identify optimal pitch profiles that reduce design-driving loads in a fully automated way.
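A heavily simplified sketch of the direct single shooting idea (a toy one-degree-of-freedom rotor with made-up coefficients, not the authors' aeroservoelastic model): parameterize the pitch profile by a ramp rate and a final pitch, simulate the shutdown, and minimize a weighted overspeed/load cost with a pitch-rate penalty:

    import numpy as np
    from scipy.optimize import minimize

    # Toy 1-DOF rotor (all numbers hypothetical): J * dW/dt = Qa(W, beta).
    J, W0 = 4.0e7, 1.26                 # inertia [kg m^2], rated speed [rad/s]
    dt = 0.05
    t = np.arange(0.0, 10.0, dt)

    def simulate(beta):
        # Forward-Euler integration of rotor speed and a crude thrust model.
        W, W_hist, thrust = W0, [], []
        for b in beta:
            Qa = 4.0e6 * max(1.0 - 3.0 * b, -0.2) * (W / W0)  # falls with pitch
            W += dt * Qa / J
            W_hist.append(W)
            thrust.append(8.0e5 * max(1.0 - 2.0 * b, 0.0) * (W / W0) ** 2)
        return np.array(W_hist), np.array(thrust)

    def cost(params):
        rate, beta_end = params
        beta = np.minimum(rate * t, beta_end)     # ramp-to-feather profile
        W_hist, thrust = simulate(beta)
        overspeed = max(W_hist.max() / W0 - 1.0, 0.0)
        load = np.abs(np.diff(thrust)).max()      # crude design-load proxy
        rate_pen = 1e3 * max(rate - 0.14, 0.0)    # ~8 deg/s pitch-rate limit
        return 50.0 * overspeed + 2.0e-5 * load + rate_pen

    res = minimize(cost, x0=[0.07, 0.45], method="Nelder-Mead")
    print("optimal ramp rate [rad/s], final pitch [rad]:", res.x)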
Hungry pigeons make suboptimal choices, less hungry pigeons do not.
Laude, Jennifer R; Pattison, Kristina F; Zentall, Thomas R
2012-10-01
Hungry animals will often choose suboptimally by being attracted to reliable signals for food that occur infrequently (they gamble) over less reliable signals for food that occur more often. That is, pigeons prefer an option that 50 % of the time provides them with a reliable signal for the appearance of food but 50 % of the time provides them with a reliable signal for the absence of food (overall 50 % reinforcement) over an alternative that always provides them with a signal for the appearance of food 75 % of the time (overall 75 % reinforcement). The pigeons appear to choose impulsively for the possibility of obtaining the reliable signal for reinforcement. There is evidence that greater hunger is associated with greater impulsivity. We tested the hypothesis that if the pigeons were less hungry, they would be less impulsive and, thus, would choose more optimally (i.e., on the basis of the overall probability of reinforcement). We found that hungry pigeons choose the 50 % reinforcement alternative suboptimally but less hungry pigeons prefer the more optimal 75 % reinforcement. Paradoxically, pigeons that needed the food more received less of it. These findings have implications for how level of motivation may also affect human suboptimal choice (e.g., purchase of lottery tickets and playing slot machines).
Space tourism optimized reusable spaceplane design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penn, J.P.; Lindley, C.A.
Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240 per pound ($529/kg), or $72,000 per passenger round-trip, goals should be about $50 per pound ($110/kg), or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made, and a route to developing such a capability is discussed. The vehicle's ability to also satisfy the traditional spacelift market is shown. © 1997 American Institute of Physics.