Real-time motion-based H.263+ frame rate control
NASA Astrophysics Data System (ADS)
Song, Hwangjun; Kim, JongWon; Kuo, C.-C. Jay
1998-12-01
Most existing H.263+ rate control algorithms, e.g. the one adopted in the near-term test model (TMN8), focus on macroblock-layer rate control and low latency under the assumptions of a constant frame rate and a constant bit rate (CBR) channel. These algorithms do not accommodate transmission bandwidth fluctuation efficiently, and the resulting video quality can be degraded. In this work, we propose a new H.263+ rate control scheme which supports the variable bit rate (VBR) channel through adjustment of the encoding frame rate and the quantization parameter. A fast algorithm for encoding frame rate control, based on the inherent motion information within a sliding window of the underlying video, is developed to efficiently pursue a good tradeoff between spatial and temporal quality. The proposed rate control algorithm also takes the time-varying bandwidth characteristic of the Internet into account and is able to accommodate the change accordingly. Experimental results are provided to demonstrate the superior performance of the proposed scheme.
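The abstract does not give the frame-rate rule itself; the sketch below only illustrates the general idea of choosing an encoding frame rate from motion measured over a sliding window. The window length, thresholds, and candidate frame rates are illustrative assumptions, not values from the paper.

```python
from collections import deque

def motion_activity(prev_frame, curr_frame):
    """Mean absolute difference between consecutive grayscale frames (2-D sequences)."""
    total, count = 0, 0
    for r_prev, r_curr in zip(prev_frame, curr_frame):
        for a, b in zip(r_prev, r_curr):
            total += abs(a - b)
            count += 1
    return total / max(count, 1)

class FrameRateController:
    """Pick an encoding frame rate from recent motion activity (illustrative thresholds)."""
    def __init__(self, window=10, rates=(7.5, 15.0, 30.0), thresholds=(2.0, 8.0)):
        self.window = deque(maxlen=window)   # sliding window of motion scores
        self.rates = rates                   # candidate frame rates (fps), assumed
        self.thresholds = thresholds         # low/high motion thresholds, assumed

    def update(self, motion_score):
        self.window.append(motion_score)
        avg = sum(self.window) / len(self.window)
        if avg < self.thresholds[0]:         # low motion: drop frame rate, spend bits spatially
            return self.rates[0]
        if avg < self.thresholds[1]:         # moderate motion
            return self.rates[1]
        return self.rates[2]                 # high motion: keep temporal resolution
```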
Flight Evaluation of an Aircraft with Side and Center Stick Controllers and Rate-Limited Ailerons
NASA Technical Reports Server (NTRS)
Deppe, P. R.; Chalk, C. R.; Shafer, M. F.
1996-01-01
As part of an ongoing government and industry effort to study the flying qualities of aircraft with rate-limited control surface actuators, two studies were previously flown to examine an algorithm developed to reduce the tendency for pilot-induced oscillation when rate limiting occurs. This algorithm, when working properly, greatly improved the performance of the aircraft in the first study. In the second study, however, the algorithm did not initially offer as much improvement. The differences between the two studies caused concern. The study detailed in this paper was performed to determine whether the performance of the algorithm was affected by the characteristics of the cockpit controllers. Time delay and flight control system noise were also briefly evaluated. An in-flight simulator, the Calspan Learjet 25, was programmed with a low roll actuator rate limit, and the algorithm was programmed into the flight control system. Side- and center-stick controllers, force and position command signals, a rate-limited feel system, a low-frequency feel system, and a feel system damper were evaluated. The flight program consisted of four flights and 38 evaluations of test configurations. Performance of the algorithm was determined to be unaffected by using side- or center-stick controllers or force or position command signals. The rate-limited feel system performed as well as the rate-limiting algorithm but was disliked by the pilots. The low-frequency feel system and the feel system damper were ineffective. Time delay and noise were determined to degrade the performance of the algorithm.
Visual Perception Based Rate Control Algorithm for HEVC
NASA Astrophysics Data System (ADS)
Feng, Zeqi; Liu, PengYu; Jia, Kebin
2018-01-01
For HEVC, rate control is an indispensable video coding technology for alleviating the contradiction between video quality and limited encoding resources during video communication. However, the benchmark rate control algorithm of HEVC ignores subjective visual perception: for key focus regions, LCU-level bit allocation is not ideal and subjective quality is unsatisfactory. In this paper, a visual-perception-based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bit rate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality under various video applications.
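The paper's exact perceptual weighting is not reproduced here; the sketch below only shows the general shape of such a scheme: a per-LCU weight built from assumed luminance and motion saliency scores splits the picture-level bit budget, and QP is then derived from λ using the commonly cited HM relation. The saliency inputs and the mixing factor alpha are assumptions.

```python
import math

def lcu_bit_allocation(picture_bits, luminance_saliency, motion_saliency, alpha=0.5):
    """Split a picture-level bit budget among LCUs using a perceptual weight.
    Saliency scores (assumed precomputed, in [0, 1]) and alpha are illustrative."""
    weights = [alpha * l + (1.0 - alpha) * m + 1e-6
               for l, m in zip(luminance_saliency, motion_saliency)]
    total = sum(weights)
    return [picture_bits * w / total for w in weights]

def qp_from_lambda(lmbda):
    """Commonly cited HM mapping from lambda to QP, clipped to the valid range."""
    return max(0, min(51, round(4.2005 * math.log(lmbda) + 13.7122)))

# Example: four LCUs sharing a 120 kbit picture budget.
bits = lcu_bit_allocation(120_000, [0.2, 0.8, 0.5, 0.1], [0.1, 0.9, 0.4, 0.2])
```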
On Optimizing H.264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics
NASA Astrophysics Data System (ADS)
Zhu, Zhongjie; Wang, Yuer; Bai, Yongqiang; Jiang, Gangyi
2010-12-01
The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved in two respects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on existing physiological and psychological research findings on human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS), such as contrast sensitivity, multichannel theory, and the masking effect. Experimental results show that the improved algorithm can simultaneously enhance the overall subjective visual quality and improve rate control precision effectively.
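For reference, the baseline quadratic R-D model that JVT-G012 uses (and that the paper modifies) relates target bits R, predicted MAD, and quantization step Q as R = X1·MAD/Q + X2·MAD/Q². A minimal sketch of solving this model for Q is shown below; the coefficient values in the example are illustrative, since in practice X1 and X2 are re-estimated from previously coded frames.

```python
import math

def quant_step_from_quadratic_rd(target_bits, mad, x1, x2):
    """
    Baseline quadratic R-D model used in JVT-G012:
        R = x1 * MAD / Q + x2 * MAD / Q^2
    Solve for the quantization step Q given a target bit budget R and predicted MAD.
    """
    if x2 == 0.0:                       # degenerate linear model
        return x1 * mad / target_bits
    # Quadratic in u = 1/Q:  x2*MAD*u^2 + x1*MAD*u - R = 0
    a, b, c = x2 * mad, x1 * mad, -target_bits
    u = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return 1.0 / u

# Example with illustrative coefficients (x1, x2 are updated frame to frame in practice).
q = quant_step_from_quadratic_rd(target_bits=20000, mad=4.5, x1=2500.0, x2=12000.0)
```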
Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm
NASA Technical Reports Server (NTRS)
Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin
1994-01-01
The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream by numerical importance, and thus a given code contains all lower-rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
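The spectral stage can be illustrated with a minimal sketch of a Karhunen-Loeve Transform across the bands of an image cube; the subsequent wavelet/EZW coding of each eigen-band is not shown, and the array shapes are assumptions.

```python
import numpy as np

def spectral_klt(cube):
    """
    Decorrelate the spectral dimension of an image cube with the Karhunen-Loeve Transform.
    cube: array of shape (bands, height, width).
    Returns (transformed cube, eigenvector basis, band means).
    """
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1).astype(np.float64)   # one row per band
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]                    # band-by-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                # strongest components first
    basis = eigvecs[:, order]
    Y = basis.T @ Xc                                 # decorrelated "eigen-bands"
    return Y.reshape(bands, h, w), basis, mean

# Each eigen-band would then be wavelet-transformed and coded with EZW,
# spending more of the bit budget on the high-variance components.
```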
Self-tuning control of attitude and momentum management for the Space Station
NASA Technical Reports Server (NTRS)
Shieh, L. S.; Sunkel, J. W.; Yuan, Z. Z.; Zhao, X. M.
1992-01-01
This paper presents a hybrid state-space self-tuning design methodology using dual-rate sampling for suboptimal digital adaptive control of attitude and momentum management for the Space Station. This new hybrid adaptive control scheme combines an on-line recursive estimation algorithm for indirectly identifying the parameters of a continuous-time system from the available fast-rate sampled data of the inputs and states and a controller synthesis algorithm for indirectly finding the slow-rate suboptimal digital controller from the designed optimal analog controller. The proposed method enables the development of digitally implementable control algorithms for the robust control of Space Station Freedom with unknown environmental disturbances and slowly time-varying dynamics.
Tuning-free controller to accurately regulate flow rates in a microfluidic network
NASA Astrophysics Data System (ADS)
Heo, Young Jin; Kang, Junsu; Kim, Min Jun; Chung, Wan Kyun
2016-03-01
We describe a control algorithm that can improve the accuracy and stability of flow regulation in a microfluidic network that uses a conventional pressure pump system. The algorithm enables simultaneous and independent control of fluid flows in multiple micro-channels of a microfluidic network, but does not require any model parameters or a tuning process. We investigate the robustness and optimality of the proposed control algorithm, and these are verified by simulations and experiments. In addition, the control algorithm is compared with a conventional PID controller to show that the proposed control algorithm resolves critical problems induced by PID control. The capability of the control algorithm can be used not only for high-precision flow regulation in the presence of disturbance, but also for useful functions in lab-on-a-chip devices such as regulation of volumetric flow rate, interface position control of two laminar flows, valveless flow switching, droplet generation and particle manipulation. We demonstrate those functions and also suggest further potential biological applications which can be accomplished with the proposed control framework.
The research of automatic speed control algorithm based on Green CBTC
NASA Astrophysics Data System (ADS)
Lin, Ying; Xiong, Hui; Wang, Xiaoliang; Wu, Youyou; Zhang, Chuanqi
2017-06-01
Automatic speed control is one of the core technologies of train operation control systems. It is a typical multi-objective optimization control algorithm, which achieves train speed control for timing, comfort, energy saving and precise parking. At present, automatic train speed control technology is widely used in metro and inter-city railways. It has been found that automatic speed control can effectively reduce the driver's workload and improve operation quality. However, the algorithms currently in use perform poorly in terms of energy saving, sometimes worse than manual driving. In order to address the energy-saving problem, this paper proposes an automatic speed control algorithm based on the Green CBTC system. Building on the Green CBTC system, the algorithm adjusts the operating state of the train to improve the utilization rate of regenerative braking feedback energy while still meeting the timing, comfort and precise parking targets. For this reason, the energy consumption of the Green CBTC system is lower than that of a traditional CBTC system. The simulation results show that the algorithm based on the Green CBTC system can effectively reduce energy consumption by improving the utilization rate of regenerative braking feedback energy.
Focusing light through random scattering media by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-01-01
The focusing of light through random scattering materials using wavefront shaping is studied in detail. We propose a new approach, the four-element division algorithm, to improve the average convergence rate and signal-to-noise ratio of focusing. Using 4096 independently controlled segments of light, the intensity at the target is enhanced 72-fold over the original intensity at the same position. The four-element division algorithm and existing phase-control algorithms for focusing through scattering media are compared in both numerical simulation and experiment. It is found that the four-element division algorithm is particularly advantageous for improving the average convergence rate of focusing.
Design and Verification of a Digital Controller for a 2-Piece Hemispherical Resonator Gyroscope.
Lee, Jungshin; Yun, Sung Wook; Rhim, Jaewook
2016-04-20
A Hemispherical Resonator Gyro (HRG) is a Coriolis Vibratory Gyro (CVG) that measures rotation angle or angular velocity using the Coriolis force acting on the vibrating mass. An HRG can be used as a rate gyro or an integrating gyro without structural modification by simply changing the control scheme. In this paper, differential control algorithms are designed for a 2-piece HRG. To design a precision controller, the electromechanical modelling and signal processing must first be performed accurately. Therefore, the equations of motion for the HRG resonator with switched harmonic excitations are derived with the Duhamel integral method. Electromechanical modeling of the resonator, electric module and charge amplifier is performed by considering the mode shape of a thin hemispherical shell. Further, signal processing and control algorithms are designed. The multi-flexing scheme of sensing and driving cycles and x-, y-axis switching cycles is appropriate for high-precision, low-maneuverability systems. On the basis of these studies, the differential control scheme is easily capable of rejecting the common-mode errors of the x- and y-axis signals and of switching to the rate-integrating mode. In the rate gyro mode the controller is composed of phase-locked loop (PLL), amplitude, quadrature and rate control loops. All controllers are designed on the basis of a digital PI controller. The signal processing and control algorithms are verified through Matlab/Simulink simulations. Finally, an FPGA and DSP board implementing these algorithms is verified through experiments.
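The loop details are not given in the abstract, but all of the loops are stated to be built on a digital PI controller. A generic discrete PI element of that kind is sketched below; the gains, sample time, output limits, and anti-windup choice are illustrative assumptions.

```python
class DiscretePI:
    """Generic digital PI controller of the kind the amplitude, quadrature and rate
    loops are built on; gains, sample time and output limits are illustrative."""
    def __init__(self, kp, ki, dt, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0
        self.out_min, self.out_max = out_min, out_max

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # simple anti-windup: clamp output and undo integration past the limit
        if u > self.out_max:
            self.integral -= error * self.dt
            u = self.out_max
        elif u < self.out_min:
            self.integral -= error * self.dt
            u = self.out_min
        return u

# e.g. amplitude loop: hold the resonator vibration amplitude at a reference value
amp_loop = DiscretePI(kp=0.8, ki=120.0, dt=1e-4)
```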
New multirate sampled-data control law structure and synthesis algorithm
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.; Yang, Gen-Sheng
1992-01-01
A new multirate sampled-data control law structure is defined and a new parameter-optimization-based synthesis algorithm for that structure is introduced. The synthesis algorithm can be applied to multirate, multiple-input/multiple-output, sampled-data control laws having a prescribed dynamic order and structure, and a priori specified sampling/update rates for all sensors, processor states, and control inputs. The synthesis algorithm is applied to design two-input, two-output tip position controllers of various dynamic orders for a sixth-order, two-link robot arm model.
Impact of Chaos Functions on Modern Swarm Optimizers.
Emary, E; Zawbaa, Hossam M
2016-01-01
Exploration and exploitation are two essential components of any optimization algorithm. Too much exploration leads to oscillation and slows the optimizer down, while too much exploitation leads to premature convergence and can leave the optimizer stuck in local minima. Therefore, balancing the rates of exploration and exploitation over the optimization lifetime is a challenge. This study evaluates the impact of using chaos-based control of exploration/exploitation rates against using systematic native control. Three modern algorithms were used in the study, namely the grey wolf optimizer (GWO), antlion optimizer (ALO) and moth-flame optimizer (MFO), in the domain of machine learning for feature selection. Results on a set of standard machine learning data sets, using a set of assessment indicators, show improved optimizer performance when exploration rates decline over repeated, chaotically varied periods rather than being decreased systematically.
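The study's exact chaotic schedule is not specified in this abstract. The sketch below only contrasts the usual systematically (linearly) decreasing exploration parameter, as in GWO, with one common chaos-based alternative driven by a logistic map; the parameter ranges and the decaying envelope are assumptions.

```python
def linear_schedule(max_iter, a0=2.0):
    """GWO-style systematically decreasing exploration parameter a: a0 -> 0."""
    return [a0 * (1.0 - t / max_iter) for t in range(max_iter)]

def chaotic_schedule(max_iter, a0=2.0, x0=0.7, r=4.0):
    """Exploration parameter modulated by a logistic map (one common chaos choice)."""
    x, values = x0, []
    for t in range(max_iter):
        x = r * x * (1.0 - x)                         # logistic map, stays in (0, 1)
        values.append(a0 * x * (1.0 - t / max_iter))  # chaotic value under a decaying envelope
    return values
```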
Control Algorithms For Liquid-Cooled Garments
NASA Technical Reports Server (NTRS)
Drew, B.; Harner, K.; Hodgson, E.; Homa, J.; Jennings, D.; Yanosy, J.
1988-01-01
Three algorithms developed for control of cooling in protective garments. Metabolic rate inferred from temperatures of cooling liquid outlet and inlet, suitably filtered to account for thermal lag of human body. Temperature at inlet adjusted to value giving maximum comfort at inferred metabolic rate. Applicable to space suits, used for automatic control of cooling in suits worn by workers in radioactive, polluted, or otherwise hazardous environments. More effective than manual control, subject to frequent, overcompensated adjustments as level of activity varies.
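No equations are given in this brief; the sketch below is one plausible reading of the scheme, assuming the heat removed is inferred from the inlet/outlet temperature difference, low-pass filtered to mimic the body's thermal lag, and mapped to a comfort inlet-temperature setpoint. The filter constant, flow properties, and comfort map are assumptions.

```python
def update_cooling_setpoint(t_inlet, t_outlet, flow_kg_s, state, dt,
                            cp=4186.0, tau=300.0):
    """
    One control step for a liquid-cooled garment (illustrative numbers).
    Heat removed  Q = m_dot * cp * (T_out - T_in)  [W]
    'state' holds the filtered heat-removal estimate (stands in for body thermal lag).
    Returns (new inlet-temperature setpoint in deg C, updated state).
    """
    q_raw = flow_kg_s * cp * (t_outlet - t_inlet)
    alpha = dt / (tau + dt)                 # first-order low-pass filter coefficient
    q_filt = state + alpha * (q_raw - state)
    # assumed comfort map: higher inferred metabolic load -> colder inlet water
    if q_filt < 150.0:
        setpoint = 25.0
    elif q_filt < 400.0:
        setpoint = 18.0
    else:
        setpoint = 10.0
    return setpoint, q_filt
```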
Resolved motion rate and resolved acceleration servo-control of wheeled mobile robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, P.F.; Neuman, C.P.; Carnegie-Mellon Univ., Pittsburgh, PA
1989-01-01
Accurate motion control of wheeled mobile robots (WMRs) is required for their application to autonomous, semi-autonomous and teleoperated tasks. The similarities between WMRs and stationary manipulators suggest that current, successful, model-based manipulator control algorithms may be applied to WMRs. Special characteristics of WMRs including higher-pairs, closed-chains, friction and unactuated and unsensed joints require innovative modeling methodologies. The WMR modeling challenge has been recently overcome, thus enabling the application of manipulator control algorithms to WMRs. This realization lays the foundation for significant technology transfer from manipulator control to WMR control. We apply two Cartesian-space manipulator control algorithms: resolved motion rate (kinematics-based) and resolved acceleration (dynamics-based) control to WMR servo-control. We evaluate simulation studies of two exemplary WMRs: Uranus (a three degree-of-freedom WMR constructed at Carnegie Mellon University), and Bicsun-Bicas (a two degree-of-freedom WMR being constructed at Sandia National Laboratories) under the control of these algorithms. Although resolved motion rate servo-control is adequate for the control of Uranus, resolved acceleration servo-control is required for the control of the mechanically simpler Bicsun-Bicas because it exhibits more dynamic coupling and nonlinearities. Successful accurate motion control of these WMRs in simulation is driving current experimental research studies. 18 refs., 7 figs., 5 tabs.
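As a generic illustration of the kinematics-based option, a resolved-motion-rate step computes actuator rates from the pseudoinverse of the robot Jacobian applied to the commanded Cartesian velocity plus a feedback correction. The Jacobian in the example is an assumed omnidirectional-platform model, not the Uranus or Bicsun-Bicas model.

```python
import numpy as np

def resolved_motion_rate_step(J, v_desired, pose_error, kp=1.0):
    """
    One resolved-motion-rate (kinematics-based) servo step.
    J: Jacobian mapping actuated wheel rates to robot body velocity [vx, vy, wz].
    v_desired: feedforward body velocity command; pose_error: Cartesian pose error.
    Returns least-squares wheel-rate commands via the Jacobian pseudoinverse.
    """
    v_cmd = np.asarray(v_desired, float) + kp * np.asarray(pose_error, float)
    return np.linalg.pinv(J) @ v_cmd

# Illustrative 4-wheel omnidirectional Jacobian (an assumption, not the paper's models):
r, L = 0.05, 0.4
J = (r / 4.0) * np.array([[ 1.0,  1.0,  1.0,  1.0],
                          [-1.0,  1.0,  1.0, -1.0],
                          [-1/L,  1/L, -1/L,  1/L]])
wheel_rates = resolved_motion_rate_step(J, [0.3, 0.0, 0.1], [0.01, -0.02, 0.0])
```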
Spacecraft Attitude Tracking and Maneuver Using Combined Magnetic Actuators
NASA Technical Reports Server (NTRS)
Zhou, Zhiqiang
2012-01-01
A paper describes attitude-control algorithms using the combination of magnetic actuators with reaction wheel assemblies (RWAs) or other types of actuators such as thrusters. Combining magnetic actuators with one or two RWAs aligned with different body axes expands the two-dimensional control torque to three dimensions. The algorithms guarantee that the spacecraft attitude and rates track the commanded attitude precisely. A design example is presented for nadir-pointing, pitch, and yaw maneuvers. The results show that precise attitude tracking can be reached and that the attitude-control accuracy is comparable with RWA-based attitude control. When only one or two RWAs are workable due to RWA failures, the attitude-control system can switch to the control algorithms for the combined magnetic actuators and RWAs without going to safe mode, and the control accuracy can be maintained. The attitude-control algorithms for the combined actuators are derived and guarantee that the spacecraft attitude and rates track the commanded values precisely. Results show that precise attitude tracking can be reached, and that the attitude-control accuracy is comparable with three-axis wheel control.
O'Shaughnessy, P T; Hemenway, D R
2000-10-01
Trials were conducted to determine those factors that affect the accuracy of a direct-reading aerosol photometer when automatically controlling airflow rate within an exposure chamber to regulate airborne dust concentrations. Photometer response was affected by a shift in the aerosol size distribution caused by changes in chamber flow rate. In addition to a dilution effect, flow rate also determined the relative amount of aerosol lost to sedimentation within the chamber. Additional calculations were added to a computer control algorithm to compensate for these effects when attempting to automatically regulate flow based on a proportional-integral-derivative (PID) feedback control algorithm. A comparison between PID-controlled trials and those performed with a constant generator output rate and dilution-air flow rate demonstrated that there was no significant decrease in photometer accuracy despite the many changes in flow rate produced when using PID control. Likewise, the PID-controlled trials produced chamber aerosol concentrations within 1% of a desired level.
Active Control of Wind Tunnel Noise
NASA Technical Reports Server (NTRS)
Hollis, Patrick (Principal Investigator)
1991-01-01
The need for an adaptive active control system was recognized, since a wind tunnel is subject to variations in air velocity, temperature, air turbulence, and other factors such as nonlinearity. Among many adaptive algorithms, the Least Mean Squares (LMS) algorithm, which is the simplest one, has been used in Active Noise Control (ANC) systems by some researchers. However, Eriksson's results (Eriksson, 1985) showed instability in the ANC system with an FIR filter for random noise input. The Recursive Least Squares (RLS) algorithm, although computationally more complex than the LMS algorithm, has better convergence and stability properties. The ANC system in the present work was simulated by using an FIR filter with an RLS algorithm for different inputs and for a number of plant models. Simulation results for the ANC system with acoustic feedback showed better robustness when used with the RLS algorithm than with the LMS algorithm for all types of inputs. Overall attenuation in the frequency domain was better in the case of the RLS adaptive algorithm. Simulation results with a more realistic plant model and an RLS adaptive algorithm showed a slower convergence rate than the case with the acoustic plant modeled as a pure delay. However, the attenuation properties were satisfactory for the simulated system with the modified plant. The effect of filter length on the rate of convergence and attenuation was studied. It was found that the rate of convergence decreases with increasing filter length, whereas the attenuation increases with increasing filter length. The final design of the ANC system was simulated and found to have a reasonable convergence rate and good attenuation properties for an input containing discrete frequencies and random noise.
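A standard recursive-least-squares adaptation of an FIR filter, of the kind simulated here, is sketched below; the forgetting factor, initialization, and filter length are illustrative, and the acoustic-feedback path modeling is not shown.

```python
import numpy as np

class RLSFilter:
    """Standard recursive-least-squares adaptation of an FIR filter."""
    def __init__(self, n_taps=32, forgetting=0.995, delta=100.0):
        self.w = np.zeros(n_taps)            # FIR weights
        self.P = np.eye(n_taps) * delta      # inverse correlation matrix estimate
        self.lam = forgetting

    def step(self, x_vec, desired):
        """x_vec: latest n_taps reference samples (newest first); desired: target sample."""
        x = np.asarray(x_vec, float)
        y = self.w @ x                       # filter output
        e = desired - y                      # a-priori error
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)         # gain vector
        self.w += k * e                      # weight update
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return y, e
```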
Sinha Gregory, Naina; Seley, Jane Jeffrie; Gerber, Linda M; Tang, Chin; Brillon, David
2016-12-01
More than one-third of hospitalized patients have hyperglycemia. Despite evidence that improving glycemic control leads to better outcomes, achieving recognized targets remains a challenge. The objective of this study was to evaluate the implementation of a computerized insulin order set and titration algorithm on rates of hypoglycemia and overall inpatient glycemic control. A prospective observational study evaluating the impact of a glycemic order set and titration algorithm in an academic medical center in non-critical care medical and surgical inpatients. The initial intervention was hospital-wide implementation of a comprehensive insulin order set. The secondary intervention was initiation of an insulin titration algorithm in two pilot medicine inpatient units. Point of care testing blood glucose reports were analyzed. These reports included rates of hypoglycemia (BG < 70 mg/dL) and hyperglycemia (BG >200 mg/dL in phase 1, BG > 180 mg/dL in phase 2). In the first phase of the study, implementation of the insulin order set was associated with decreased rates of hypoglycemia (1.92% vs 1.61%; p < 0.001) and increased rates of hyperglycemia (24.02% vs 27.27%; p < 0.001) from 2010 to 2011. In the second phase, addition of a titration algorithm was associated with decreased rates of hypoglycemia (2.57% vs 1.82%; p = 0.039) and increased rates of hyperglycemia (31.76% vs 41.33%; p < 0.001) from 2012 to 2013. A comprehensive computerized insulin order set and titration algorithm significantly decreased rates of hypoglycemia. This significant reduction in hypoglycemia was associated with increased rates of hyperglycemia. Hardwiring the algorithm into the electronic medical record may foster adoption.
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch
Karthikeyan, M.; Sree Ranga Raja, T.
2015-01-01
Economic load dispatch (ELD) is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, and there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods. PMID:26491710
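The precise dynamic schedules used in DHSPM are not given in the abstract; the sketch below shows one plausible way to vary HMCR and PAR with the iteration count, together with a common form of polynomial mutation. The ranges and the distribution index are assumptions.

```python
import random

def dynamic_hmcr_par(t, t_max, hmcr_range=(0.70, 0.99), par_range=(0.05, 0.45)):
    """Vary HMCR and PAR with the iteration count instead of fixing them (assumed linear schedules)."""
    frac = t / float(t_max)
    hmcr = hmcr_range[0] + (hmcr_range[1] - hmcr_range[0]) * frac   # grows: rely on memory more
    par = par_range[1] - (par_range[1] - par_range[0]) * frac       # shrinks: smaller pitch adjustments
    return hmcr, par

def polynomial_mutation(x, lower, upper, eta=20.0):
    """Polynomial mutation of a single decision variable (one common form)."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    return min(max(x + delta * (upper - lower), lower), upper)
```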
Study on application of adaptive fuzzy control and neural network in the automatic leveling system
NASA Astrophysics Data System (ADS)
Xu, Xiping; Zhao, Zizhao; Lan, Weiyong; Sha, Lei; Qian, Cheng
2015-04-01
This paper discusses the application of adaptive fuzzy control and the BP neural network algorithm to a large-platform automatic leveling control system. The purpose is to develop a measurement system that can be leveled quickly, so that measurement equipment installed on the leveling platform can begin precision measurement work promptly, improving the efficiency of precision measurement. The paper focuses on the analysis of an automatic leveling system based on a fuzzy controller: the fuzzy controller is combined with a BP neural network, and the BP algorithm is used to refine the empirical rules, constructing an adaptive fuzzy control system. Meanwhile, the learning rate of the BP algorithm is adjusted at run time to accelerate convergence. The simulation results show that the proposed control method can effectively improve the leveling precision of the automatic leveling system and shorten the leveling time.
An End-to-End Loss Discrimination Scheme for Multimedia Transmission over Wireless IP Networks
NASA Astrophysics Data System (ADS)
Zhao, Hai-Tao; Dong, Yu-Ning; Li, Yang
With the rapid growth of wireless IP networks, wireless IP access networks have many potential applications in a variety of civilian and military environments. Many of these applications, such as real-time audio/video streaming, require some form of end-to-end QoS assurance. In this paper, an algorithm called WMPLD (Wireless Multimedia Packet Loss Discrimination) is proposed for multimedia transmission control over hybrid wired-wireless IP networks. The relationship between packet length and packet loss rate in the Gilbert wireless error model is investigated. The algorithm detects the nature of packet losses by sending large and small packets alternately, and controls the sending rate of nodes. In addition, by means of an updating factor K, the algorithm can adapt to changes in network state quickly. Simulation results show that, compared to previous algorithms, the WMPLD algorithm improves network throughput as well as reducing the congestion loss rate in various situations.
NASA Astrophysics Data System (ADS)
Hanada, Masaki; Nakazato, Hidenori; Watanabe, Hitoshi
Multimedia applications such as music or video streaming, video teleconferencing and IP telephony are flourishing in packet-switched networks. Applications that generate such real-time data can have very diverse quality-of-service (QoS) requirements. In order to guarantee diverse QoS requirements, the combined use of a packet scheduling algorithm based on Generalized Processor Sharing (GPS) and leaky bucket traffic regulator is the most successful QoS mechanism. GPS can provide a minimum guaranteed service rate for each session and tight delay bounds for leaky bucket constrained sessions. However, the delay bounds for leaky bucket constrained sessions under GPS are unnecessarily large because each session is served according to its associated constant weight until the session buffer is empty. In order to solve this problem, a scheduling policy called Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) was proposed in [17]. ORC-GPS is a rate-based scheduling like GPS, and controls the service rate in order to lower the delay bounds for leaky bucket constrained sessions. In this paper, we propose a call admission control (CAC) algorithm for ORC-GPS, for leaky-bucket constrained sessions with deterministic delay requirements. This CAC algorithm for ORC-GPS determines the optimal values of parameters of ORC-GPS from the deterministic delay requirements of the sessions. In numerical experiments, we compare the CAC algorithm for ORC-GPS with one for GPS in terms of schedulable region and computational complexity.
Ma, Xiaosu; Chien, Jenny Y; Johnson, Jennal; Malone, James; Sinha, Vikram
2017-08-01
The purpose of this prospective, model-based simulation approach was to evaluate the impact of various rapid-acting mealtime insulin dose-titration algorithms on glycemic control (hemoglobin A1c [HbA1c]). Seven stepwise, glucose-driven insulin dose-titration algorithms were evaluated with a model-based simulation approach by using insulin lispro. Pre-meal blood glucose readings were used to adjust insulin lispro doses. Two control dosing algorithms were included for comparison: no insulin lispro (basal insulin+metformin only) or insulin lispro with fixed doses without titration. Of the seven dosing algorithms assessed, daily adjustment of insulin lispro dose, when glucose targets were met at pre-breakfast, pre-lunch, and pre-dinner, sequentially, demonstrated greater HbA1c reduction at 24 weeks, compared with the other dosing algorithms. Hypoglycemic rates were comparable among the dosing algorithms except for higher rates with the insulin lispro fixed-dose scenario (no titration), as expected. The inferior HbA1c response for the "basal plus metformin only" arm supports the additional glycemic benefit with prandial insulin lispro. Our model-based simulations support a simplified dosing algorithm that does not include carbohydrate counting, but that includes glucose targets for daily dose adjustment to maintain glycemic control with a low risk of hypoglycemia.
Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fijany, A.; Milman, M.; Redding, D.
1994-12-31
In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem, since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
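The Fast Invariant Imbedding algorithm itself is not reproduced here; the sketch below only shows the computation the control step reduces to, solving a discrete Poisson equation on a regular grid, using a standard fast-sine-transform solver with Dirichlet boundaries as a stand-in.

```python
import numpy as np
from scipy.fft import dstn, idstn

def solve_poisson_dirichlet(f, h):
    """
    Solve the discrete Poisson equation  lap(u) = f  on an n-by-n interior grid with
    zero (Dirichlet) boundaries, via a fast sine transform. f: right-hand side, h: spacing.
    """
    n = f.shape[0]
    k = np.arange(1, n + 1)
    # eigenvalues of the 1-D second-difference operator with Dirichlet boundaries
    lam = (2.0 * np.cos(np.pi * k / (n + 1)) - 2.0) / h**2
    f_hat = dstn(f, type=1, norm="ortho")
    u_hat = f_hat / (lam[:, None] + lam[None, :])
    return idstn(u_hat, type=1, norm="ortho")

# Quick self-check against u = sin(pi x) sin(pi y), for which lap(u) = -2 pi^2 u.
n, h = 63, 1.0 / 64
x = np.arange(1, n + 1) * h
u_true = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
u = solve_poisson_dirichlet(-2.0 * np.pi**2 * u_true, h)
assert np.max(np.abs(u - u_true)) < 1e-2
```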
MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm
Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.
2014-01-01
The problem of obtaining the transmission rate in an ad hoc network consists of adjusting the power of each node so that the signal-to-interference ratio (SIR) requirement and the energy required to transmit from one node to another are satisfied at the same time. Therefore, an optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission rate selection assuming perfect power control, and our proposal achieves a 10% improvement over the scheme that uses the handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that solves the problem of power combining, interference, data rate, and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm performs better (by 15%) than the non-optimized CSMA-CDMA protocol. Therefore, we show by simulation the effectiveness of the proposed protocol in terms of throughput. PMID:25140339
A real-time phoneme counting algorithm and application for speech rate monitoring.
Aharonson, Vered; Aharonson, Eran; Raichlin-Levi, Katia; Sotzianu, Aviv; Amir, Ofer; Ovadia-Blechman, Zehava
2017-03-01
Adults who stutter can learn to control and improve their speech fluency by modifying their speaking rate. Existing speech therapy technologies can assist this practice by monitoring speaking rate and providing feedback to the patient, but cannot provide an accurate, quantitative measurement of speaking rate. Moreover, most technologies are too complex and costly to be used for home practice. We developed an algorithm and a smartphone application that monitor a patient's speaking rate in real time and provide user-friendly feedback to both patient and therapist. Our speaking rate computation is performed by a phoneme counting algorithm which uses spectral transition measure extraction to estimate phoneme boundaries. The algorithm is implemented in real time in a mobile application that presents its results in a user-friendly interface. The application incorporates two modes: one provides the patient with visual feedback of his or her speech rate for self-practice, and another provides the speech therapist with recordings, speech rate analysis and tools to manage the patient's practice. The algorithm's phoneme counting accuracy was validated on ten healthy subjects who read a paragraph at slow, normal and fast paces, and was compared to manual counting by speech experts. Test-retest and intra-counter reliability were assessed. Preliminary results indicate differences of -4% to 11% between automatic and human phoneme counting. Differences were largest for slow speech. The application can thus provide reliable, user-friendly, real-time feedback for speaking rate control practice.
Optimal Control for Aperiodic Dual-Rate Systems With Time-Varying Delays.
Aranda-Escolástico, Ernesto; Salt, Julián; Guinaldo, María; Chacón, Jesús; Dormido, Sebastián
2018-05-09
In this work, we consider a dual-rate scenario with slow input and fast output. Our objective is the maximization of the decay rate of the system through the suitable choice of the n-input signals between two measures (periodic sampling) and their times of application. The optimization algorithm is extended for time-varying delays in order to make possible its implementation in networked control systems. We provide experimental results in an air levitation system to verify the validity of the algorithm in a real plant.
Convergence Rates of Finite Difference Stochastic Approximation Algorithms
2016-06-01
The convergence rates of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm are studied under various updating schemes using finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite-difference gradient approximations.
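A basic Kiefer-Wolfowitz iteration with central finite differences is sketched below for orientation; the gain and perturbation sequences shown are conventional textbook choices, not the accelerated updating schemes studied in the report.

```python
import random

def kiefer_wolfowitz(f_noisy, x0, iters=500, a0=0.5, c0=0.1, alpha=1.0, gamma=1.0/6.0):
    """
    Basic Kiefer-Wolfowitz stochastic approximation with central finite differences.
    f_noisy(x): noisy evaluation of the objective at x (list of floats).
    Gain sequences a_k = a0/k**alpha and c_k = c0/k**gamma are conventional choices.
    """
    x = list(x0)
    d = len(x)
    for k in range(1, iters + 1):
        a_k = a0 / k**alpha
        c_k = c0 / k**gamma
        grad = []
        for i in range(d):
            x_plus = list(x); x_plus[i] += c_k
            x_minus = list(x); x_minus[i] -= c_k
            grad.append((f_noisy(x_plus) - f_noisy(x_minus)) / (2.0 * c_k))
        x = [xi - a_k * gi for xi, gi in zip(x, grad)]
    return x

# Example: noisy quadratic with minimum at (1, -2).
noisy_quad = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2 + random.gauss(0.0, 0.01)
x_star = kiefer_wolfowitz(noisy_quad, [0.0, 0.0])
```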
NASA Astrophysics Data System (ADS)
Mazur, Krzysztof; Wrona, Stanislaw; Pawelczyk, Marek
2018-01-01
The paper presents and discusses the implementation of multichannel global active noise control systems. As a test plant, an active casing is used. It has been developed by the authors to reduce device noise directly at the source by controlling the vibration of its casing. To provide a global acoustic effect in the whole environment where the device operates, a number of secondary sources and sensors are required for each casing wall, making the whole active control structure complicated, i.e. one with a large number of interacting channels. The paper discloses all details concerning the hardware setup and efficient implementation of control algorithms for the multichannel case. A new formulation is presented to introduce a distributed version of the Switched-error Filtered-reference Least Mean Squares (FXLMS) algorithm together with adaptation rate enhancement. The convergence rate of the proposed algorithm is compared with the original Multiple-error FXLMS. A number of hints drawn from many years of the authors' experience in microprocessor control system design and signal processing algorithm optimization are presented. They can be used for various active control and signal processing applications, both for academic research and commercialization.
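The distributed switched-error formulation is not reproduced here; the sketch below shows the basic single-actuator, multiple-error FXLMS update it builds on, where the switched variant updates against one error sensor at a time instead of the sum. Secondary-path models, step size, and buffer lengths are assumptions.

```python
import numpy as np

class MultiErrorFxLms:
    """
    Single-reference, single-actuator, multiple-error FXLMS in its basic form.
    s_hat: estimated secondary-path impulse responses, one per error microphone.
    Error samples are assumed to be the superposition measured at each microphone.
    """
    def __init__(self, n_taps, s_hat, mu=1e-3):
        self.w = np.zeros(n_taps)                       # control filter
        self.mu = mu
        self.s_hat = [np.asarray(s, float) for s in s_hat]
        self.x_buf = np.zeros(n_taps)                   # reference history
        self.fx_buf = [np.zeros(n_taps + len(s) - 1) for s in self.s_hat]

    def step(self, x_n, errors, active=None):
        """x_n: new reference sample; errors: error-mic samples; active: sensor index for
        switched-error updating (None = sum over all sensors). Returns the control output."""
        self.x_buf = np.roll(self.x_buf, 1)
        self.x_buf[0] = x_n
        y = self.w @ self.x_buf                         # anti-noise sample to the actuator
        grad = np.zeros_like(self.w)
        sensors = range(len(errors)) if active is None else [active]
        for m in sensors:
            buf = self.fx_buf[m]                        # reference history for this path
            buf[:] = np.roll(buf, 1)
            buf[0] = x_n
            # filtered reference: reference passed through the secondary-path estimate
            fx = np.array([self.s_hat[m] @ buf[k:k + len(self.s_hat[m])]
                           for k in range(len(self.w))])
            grad += errors[m] * fx
        self.w -= self.mu * grad                        # LMS update on summed (or switched) error
        return y
```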
Dailey, George; Aurand, Lisa; Stewart, John; Ameer, Barbara; Zhou, Rong
2014-03-01
Several titration algorithms can be used to adjust insulin dose and attain blood glucose targets. We compared clinical outcomes using three initiation and titration algorithms for insulin glargine in insulin-naive patients with type 2 diabetes mellitus (T2DM), focusing on those receiving both metformin and a sulfonylurea (SU) at baseline. This was a pooled analysis of patient-level data from prospective, randomized, controlled 24-week trials. Patients received algorithm 1 (1 IU increase once daily, if fasting plasma glucose [FPG] > target), algorithm 2 (2 IU increase every 3 days, if FPG > target), or algorithm 3 (treat-to-target, generally 2-8 IU increase weekly based on 2-day mean FPG levels). Glycemic control, insulin dose, and hypoglycemic events were compared between algorithms. Overall, 1380 patients were included. In patients receiving metformin and SU at baseline, there were no significant differences in glycemic control between algorithms. Weight-adjusted dose was higher for algorithm 2 vs algorithms 1 and 3 (P = 0.0037 and P < 0.0001, respectively), though results were not significantly different when adjusted for reductions in HbA1c (0.36 IU/kg, 0.43 IU/kg, and 0.31 IU/kg for algorithms 1, 2, and 3, respectively). Yearly hypoglycemic event rates (confirmed blood glucose <56 mg/dL) were higher for algorithm 3 than algorithms 1 (P = 0.0003) and 2 (P < 0.0001). Three algorithms for initiation and titration of insulin glargine in patients with T2DM resulted in similar levels of glycemic control, with lower rates of hypoglycemia for patients treated using the simpler algorithms 1 and 2.
Fu, Xingang; Li, Shuhui; Fairbank, Michael; Wunsch, Donald C; Alonso, Eduardo
2015-09-01
This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm as well as how to implement optimal control of a grid-connected converter (GCC) using an RNN. To successfully and efficiently train an RNN using the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by the LM algorithm. This paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. This paper presents an analytical study on the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate the ideal optimal controller. The performance of an ideal optimal controller and a well-trained RNN controller was compared in close to real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rate conditions. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari
2014-01-01
A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total makespan and the number of setups simultaneously. Trapezoidal fuzzy numbers are used for variables such as operation and travelling time in order to generate results with higher accuracy that are more representative of real-case data. An improved genetic algorithm called the fuzzy adaptive genetic algorithm (FAGA) is proposed to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised, in which a fuzzy expert experience controller (FEEC) is integrated with an automatic learning dynamic fuzzy controller (ALDFC) technique. The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidates, crossover rate, and mutation rate, in contrast to using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of these five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test beds and testing on a multiobjective fuzzy mixed-model production assembly line sequencing optimization problem. The simulation results highlight that the proposed optimization algorithm is more efficient than the standard genetic algorithm on the mixed-model assembly line sequencing model. PMID:24982962
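The FEEC/ALDFC fuzzy controllers are not reproduced here; the sketch below only illustrates the underlying idea of adapting crossover and mutation probabilities from population feedback instead of fixing them. The diversity measure, thresholds, and adjustment factors are assumptions.

```python
def adapt_ga_parameters(fitnesses, pc, pm, stagnant_gens,
                        pc_bounds=(0.5, 0.95), pm_bounds=(0.01, 0.30)):
    """
    Adjust crossover (pc) and mutation (pm) probabilities from population feedback,
    instead of keeping them fixed (illustrative rules, not the paper's fuzzy controllers).
    fitnesses: current population fitness values (minimization assumed);
    stagnant_gens: generations without improvement of the best solution.
    """
    avg = sum(fitnesses) / len(fitnesses)
    best = min(fitnesses)
    diversity = (avg - best) / (abs(avg) + 1e-12)  # crude spread measure
    if diversity < 0.01 or stagnant_gens > 5:      # converging or stuck: explore more
        pm = min(pm * 1.5, pm_bounds[1])
        pc = max(pc * 0.9, pc_bounds[0])
    else:                                          # healthy spread: exploit more
        pm = max(pm * 0.9, pm_bounds[0])
        pc = min(pc * 1.05, pc_bounds[1])
    return pc, pm
```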
Minimum airflow reset of single-duct VAV terminal boxes
NASA Astrophysics Data System (ADS)
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and applied to actual systems for performance validation. The results of the theoretical analysis, numeric simulations, and experiments show that the optimal control algorithms can automatically identify the minimum rate of heating airflow under actual working conditions. Improved control helps to stabilize room air temperatures. The vertical difference in the room air temperature was lower than the comfort value. Measurements of room CO2 levels indicate that when the minimum airflow set point was reduced it did not adversely affect the indoor air quality. According to the measured energy results, optimal control algorithms give a lower rate of reheating energy consumption than conventional controls.
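The dissertation's reset logic is not reproduced here, but the kind of lower bound it describes can be sketched by combining the ASHRAE 62.1 breathing-zone ventilation calculation with a heating-load airflow check. The default people/area rates below are the standard's office values; the discharge temperature, zone air-distribution effectiveness, and load figures are assumptions.

```python
def minimum_airflow_setpoint(people, area_ft2, heating_load_btuh,
                             t_supply_f=90.0, t_room_f=70.0,
                             rp_cfm=5.0, ra_cfm=0.06, ez=0.8):
    """
    Candidate minimum-airflow set point for a VAV terminal box (IP units).
    Ventilation side: ASHRAE 62.1 breathing-zone rate Vbz = Rp*Pz + Ra*Az, divided by
    the zone air-distribution effectiveness Ez (office defaults for Rp/Ra).
    Heating side: airflow needed to meet the design heating load at the assumed
    discharge temperature, Q = q / (1.08 * dT).
    """
    v_bz = rp_cfm * people + ra_cfm * area_ft2
    v_oz = v_bz / ez                                               # ventilation-driven minimum, cfm
    v_heat = heating_load_btuh / (1.08 * (t_supply_f - t_room_f))  # heating-driven minimum, cfm
    return max(v_oz, v_heat)

# Example: a 400 ft2 office zone with 3 occupants and a 6,000 Btu/h design heating load.
setpoint_cfm = minimum_airflow_setpoint(3, 400.0, 6000.0)
```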
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Andrews; Spyridon Antonakopoulos; Steve Fortune
2011-07-12
This Concept Definition Study focused on developing a scientific understanding of methods to reduce energy consumption in data networks using rate adaptation. Rate adaptation is a collection of techniques that reduce energy consumption when traffic is light, and only require full energy when traffic is at full provisioned capacity. Rate adaptation is a very promising technique for saving energy: modern data networks are typically operated at average rates well below capacity, but network equipment has not yet been designed to incorporate rate adaptation. The Study concerns packet-switching equipment, routers and switches; such equipment forms the backbone of the modern Internet. The focus of the study is on algorithms and protocols that can be implemented in software or firmware to exploit hardware power-control mechanisms. Hardware power-control mechanisms are widely used in the computer industry, and are beginning to be available for networking equipment as well. Network equipment has different performance requirements than computer equipment because of the very fast rate of packet arrival; hence novel power-control algorithms are required for networking. This study resulted in five published papers, one internal report, and two patent applications, documented below. The specific technical accomplishments are the following: • A model for the power consumption of switching equipment used in service-provider telecommunication networks as a function of operating state, and measured power-consumption values for typical current equipment. • An algorithm for use in a router that adapts packet processing rate and hence power consumption to traffic load while maintaining performance guarantees on delay and throughput. • An algorithm that performs network-wide traffic routing with the objective of minimizing energy consumption, assuming that routers have less-than-ideal rate adaptivity. • An estimate of the potential energy savings in service-provider networks using feasibly-implementable rate adaptivity. • A buffer-management algorithm that is designed to reduce the size of router buffers, and hence energy consumed. • A packet-scheduling algorithm designed to minimize packet-processing energy requirements. Additional research is recommended in at least two areas: further exploration of rate-adaptation in network switching equipment, including incorporation of rate-adaptation in actual hardware, allowing experimentation in operational networks; and development of control protocols that allow parts of networks to be shut down while minimizing disruption to traffic flow in the network. The research is an integral part of a large effort within Bell Laboratories, Alcatel-Lucent, aimed at dramatic improvements in the energy efficiency of telecommunication networks. This Study did not explicitly consider any commercialization opportunities.
Burton, Tanya; Le Nestour, Elisabeth; Neary, Maureen; Ludlam, William H
2016-04-01
This study aimed to develop an algorithm to identify patients with Cushing's disease (CD), and to quantify the clinical and economic burden that patients with CD face compared to CD-free controls. A retrospective cohort study of CD patients was conducted in a large US commercial health plan database between 1/1/2007 and 12/31/2011. A control group with no evidence of CD during the same period was matched 1:3 based on demographics. Comorbidity rates were compared using Poisson regression, and health care costs were compared using robust variance estimation. A case-finding algorithm identified 877 CD patients, who were matched to 2631 CD-free controls. The age and sex distribution of the selected population matched the known epidemiology of CD. CD patients were found to have comorbidity rates that were two to five times higher and health care costs that were four to seven times higher than CD-free controls. An algorithm based on eight pituitary conditions and procedures appeared to identify CD patients in a claims database without a unique diagnosis code. Young CD patients had high rates of comorbidities that are more commonly observed in an older population (e.g., diabetes, hypertension, and cardiovascular disease). Observed health care costs were also high for CD patients compared to CD-free controls, but may have been even higher if the sample had included healthier controls with no health care use as well.
Bouc-Wen hysteresis model identification using Modified Firefly Algorithm
NASA Astrophysics Data System (ADS)
Zaman, Mohammad Asif; Sikder, Urmita
2015-12-01
The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, Genetic Algorithm and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data.
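Scoring a candidate parameter set requires a forward simulation of the Bouc-Wen model against the measured data. A minimal sketch is given below, using the standard hysteretic-state equation with explicit Euler integration and a sum-of-squared-errors objective; the output form, integration step, and error measure are choices made here, not necessarily the paper's.

```python
def simulate_bouc_wen(displacement, dt, A, beta, gamma, n, alpha, k):
    """
    Forward-simulate the Bouc-Wen model for a given displacement history (explicit Euler).
    Hysteretic state:  dz/dt = A*dx/dt - beta*|dx/dt|*|z|**(n-1)*z - gamma*dx/dt*|z|**n
    Restoring force (one common output form):  f = alpha*k*x + (1-alpha)*k*z
    n >= 1 is assumed.
    """
    z, forces = 0.0, []
    for i in range(len(displacement)):
        x = displacement[i]
        x_prev = displacement[i - 1] if i > 0 else x
        xdot = (x - x_prev) / dt
        zdot = A * xdot - beta * abs(xdot) * abs(z)**(n - 1) * z - gamma * xdot * abs(z)**n
        z += zdot * dt
        forces.append(alpha * k * x + (1.0 - alpha) * k * z)
    return forces

def identification_error(params, displacement, measured_force, dt):
    """Sum-of-squared-errors objective that a firefly-type search would minimize."""
    predicted = simulate_bouc_wen(displacement, dt, *params)
    return sum((p - m) ** 2 for p, m in zip(predicted, measured_force))
```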
Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle
NASA Technical Reports Server (NTRS)
Bergmann, E.; Weiler, P.
1983-01-01
An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors, where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction-controlled spacecraft.
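The flight code uses a tailored revised Simplex implementation; a minimal desktop analogue of the same linear program, minimizing fuel subject to the firing times reproducing the commanded rate change with non-negative times, can be posed with scipy.optimize.linprog. The jet acceleration vectors and fuel weights below are made-up illustrative values.

```python
import numpy as np
from scipy.optimize import linprog

# Columns are per-jet angular acceleration vectors (rad/s^2) -- illustrative values only.
jet_accels = np.array([[ 0.02, -0.02,  0.00,  0.00,  0.01, -0.01],
                       [ 0.00,  0.00,  0.03, -0.03,  0.01,  0.01],
                       [ 0.01,  0.01, -0.01, -0.01,  0.02, -0.02]])
fuel_per_sec = np.array([1.0, 1.0, 1.2, 1.2, 0.8, 0.8])   # assumed relative fuel rates
rate_change_request = np.array([0.015, -0.010, 0.005])     # commanded delta-omega (rad/s)

# Minimize fuel subject to: sum_i t_i * a_i = rate change request, t_i >= 0.
result = linprog(c=fuel_per_sec,
                 A_eq=jet_accels, b_eq=rate_change_request,
                 bounds=[(0, None)] * len(fuel_per_sec),
                 method="highs")
firing_times = result.x if result.success else None
```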
Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm
NASA Technical Reports Server (NTRS)
Mitra, Sunanda; Pemmaraju, Surya
1992-01-01
Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length rate error without a priori knowledge of their membership functions and familiarity with the behavior of the Tethered Satellite System.
A Self Adaptive Differential Evolution Algorithm for Global Optimization
NASA Astrophysics Data System (ADS)
Kumar, Pravesh; Pant, Millie
This paper presents a new Differential Evolution algorithm based on the hybridization of adaptive control parameters and trigonometric mutation. First we propose a self-adaptive DE named ADE, in which the choice of the control parameters F and Cr is not fixed at constant values but is adapted iteratively. The proposed algorithm is further modified by applying trigonometric mutation, and the resulting algorithm is named ATDE. The performance of ATDE is evaluated on a set of 8 benchmark functions and the results are compared with the classical DE algorithm in terms of average fitness function value, number of function evaluations, convergence time and success rate. The numerical results show the competence of the proposed algorithm.
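The abstract describes the ADE adaptation only at a high level, so the following Python sketch is an illustration rather than the authors' method: a DE/rand/1/bin loop in which each individual's F and Cr are occasionally re-sampled instead of being held fixed. The re-sampling probability (0.1), the parameter ranges, and the function names are assumptions borrowed from common self-adaptive DE variants, and the trigonometric mutation step of ATDE is omitted.

```python
import numpy as np

def self_adaptive_de(obj, bounds, pop_size=30, max_gen=200, seed=0):
    """Minimal DE/rand/1/bin with iteratively adapted F and Cr (ADE-style sketch)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([obj(x) for x in pop])
    F = np.full(pop_size, 0.5)       # per-individual mutation factor
    Cr = np.full(pop_size, 0.9)      # per-individual crossover rate

    for _ in range(max_gen):
        for i in range(pop_size):
            # Occasionally re-sample the control parameters (self-adaptation, assumed rule)
            if rng.random() < 0.1:
                F[i] = 0.1 + 0.9 * rng.random()
            if rng.random() < 0.1:
                Cr[i] = rng.random()
            # DE/rand/1 mutation from three distinct individuals
            a, b, c = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            mutant = np.clip(pop[a] + F[i] * (pop[b] - pop[c]), lo, hi)
            # Binomial crossover with at least one mutant component
            mask = rng.random(dim) < Cr[i]
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection
            f_trial = obj(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

For example, self_adaptive_de(lambda x: float(np.sum(x**2)), [(-5, 5)] * 10) minimizes a 10-dimensional sphere function.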
Control of a High Speed Flywheel System for Energy Storage in Space Applications
NASA Technical Reports Server (NTRS)
Kenny, Barbara H.; Kascak, Peter E.; Jansen, Ralph; Dever, Timothy; Santiago, Walter
2004-01-01
A novel control algorithm for the charge and discharge modes of operation of a flywheel energy storage system for space applications is presented. The motor control portion of the algorithm uses sensorless field oriented control with position and speed estimates determined from a signal injection technique at low speeds and a back EMF technique at higher speeds. The charge and discharge portions of the algorithm use command feed-forward and disturbance decoupling, respectively, to achieve fast response with low gains. Simulation and experimental results are presented demonstrating the successful operation of the flywheel control up to the rated speed of 60,000 rpm.
Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator
NASA Astrophysics Data System (ADS)
Rehmatullah, Faizan
In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to formulate a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform, and by using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of simulator workspace for every maneuver while simulating the vehicle perception. With the ability to handle nonlinear constraints, the model predictive control allows us to incorporate workspace limitations.
IDMA-Based MAC Protocol for Satellite Networks with Consideration on Channel Quality
2014-01-01
In order to overcome the shortcomings of existing medium access control (MAC) protocols based on TDMA or CDMA in satellite networks, the interleave division multiple access (IDMA) technique is introduced into satellite communication networks. A novel wide-band IDMA MAC protocol based on channel quality is therefore proposed in this paper, consisting of a dynamic power allocation algorithm, a rate adaptation algorithm, and a call admission control (CAC) scheme. Firstly, the power allocation algorithm, combining IDMA SINR-evolution and channel quality prediction, is developed to guarantee high power efficiency even in poor channel conditions. Secondly, an effective rate adaptation algorithm is realized, based on accurate per-timeslot channel information and rate degradation. Moreover, based on channel quality prediction, a CAC scheme combining the new power allocation algorithm, rate scheduling, and buffering strategies is proposed for the emerging IDMA systems; it can support a variety of traffic types and offer quality of service (QoS) guarantees corresponding to different priority levels. Simulation results show that the new wide-band IDMA MAC protocol can make an accurate estimation of the available resources considering the effect of multiuser detection (MUD) and the QoS requirements of multimedia traffic, leading to low outage probability as well as high overall system throughput. PMID:25126592
A Real-Time Position-Locating Algorithm for CCD-Based Sunspot Tracking
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
1996-01-01
NASA Marshall Space Flight Center's (MSFC) EXperimental Vector Magnetograph (EXVM) polarimeter measures the sun's vector magnetic field. These measurements are taken to improve understanding of the sun's magnetic field in the hope of better predicting solar flares. Part of the procedure for the EXVM requires image motion stabilization over a period of a few minutes. A high speed tracker can be used to reduce image motion produced by wind loading on the EXVM, fluctuations in the atmosphere and other vibrations. The tracker consists of two elements, an image motion detector and a control system. The image motion detector determines the image movement from one frame to the next and sends an error signal to the control system. For the ground-based application, reducing image motion due to atmospheric fluctuations requires an error determination rate of at least 100 Hz. It would be desirable to have an error determination rate of 1 kHz to assure that higher rate image motion is reduced and to increase the control system stability. Two algorithms that are typically used for tracking are presented. These algorithms are examined for their applicability to tracking sunspots, specifically their accuracy if only one column and one row of CCD pixels are used. To examine the accuracy of this method two techniques are used. One involves moving a sunspot image a known distance with computer software, then applying the particular algorithm to see how accurately it determines this movement. The second technique involves using a rate table to control the object motion, then applying the algorithms to see how accurately each determines the actual motion. Results from these two techniques are presented.
Distributed autonomous systems: resource management, planning, and control algorithms
NASA Astrophysics Data System (ADS)
Smith, James F., III; Nguyen, ThanhVu H.
2005-05-01
Distributed autonomous systems, i.e., systems that have separated distributed components, each of which exhibits some degree of autonomy, are increasingly providing solutions to naval and other DoD problems. Recently developed control, planning and resource allocation algorithms for two types of distributed autonomous systems will be discussed. The first distributed autonomous system (DAS) to be discussed consists of a collection of unmanned aerial vehicles (UAVs) that are under fuzzy logic control. The UAVs fly and conduct meteorological sampling in a coordinated fashion determined by their fuzzy logic controllers to determine the atmospheric index of refraction. Once in flight, no human intervention is required. A fuzzy planning algorithm determines the optimal trajectory, sampling rate and pattern for the UAVs and an interferometer platform while taking into account risk, reliability, priority for sampling in certain regions, fuel limitations, mission cost, and related uncertainties. The real-time fuzzy control algorithm running on each UAV will give the UAV limited autonomy, allowing it to change course immediately without consulting with any commander, request other UAVs to help it, alter its sampling pattern and rate when observing interesting phenomena, or terminate the mission and return to base. The algorithms developed will be compared to a resource manager (RM) developed for another DAS problem related to electronic attack (EA). This RM is based on fuzzy logic and optimized by evolutionary algorithms. It allows a group of dissimilar platforms to use EA resources distributed throughout the group. For both DAS types, significant theoretical and simulation results will be presented.
NASA Astrophysics Data System (ADS)
Gramajo, German G.
This thesis presents an algorithm for a search and coverage mission that has increased autonomy in generating an ideal trajectory while explicitly considering the available energy in the optimization. Further, current algorithms used to generate trajectories depend on the operator providing a discrete set of turning rate requirements to obtain an optimal solution. This work proposes an additional modification to the algorithm so that it optimizes the trajectory over a range of turning rates instead of a discrete set of turning rates. This thesis conducts an evaluation of the algorithm with variation in turn duration, entry-heading angle, and entry point. Comparative studies of the algorithm with an existing method indicate improved autonomy in choosing the optimization parameters while producing trajectories with better coverage area and a closer final distance to the desired terminal point.
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Oliver, Emerson; Smith, Austin
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper provides an overview of the algorithms used for both fault-detection and measurement down selection.
Peak reduction for commercial buildings using energy storage
NASA Astrophysics Data System (ADS)
Chua, K. H.; Lim, Y. S.; Morris, S.
2017-11-01
Battery-based energy storage has emerged as a cost-effective solution for peak reduction due to the declining price of batteries. In this study, a battery-based energy storage system is developed and implemented to achieve an optimal peak reduction for commercial customers with the limited energy capacity of the storage. The energy storage system is formed by three bi-directional power converters rated at 5 kVA each and a battery bank with a capacity of 64 kWh. Three control algorithms, namely fixed-threshold, adaptive-threshold, and fuzzy-based control algorithms, have been developed and implemented in the energy storage system in a campus building. The control algorithms are evaluated and compared under different load conditions. The overall experimental results show that the fuzzy-based controller is the most effective algorithm among the three controllers in peak reduction. The fuzzy-based control algorithm is capable of incorporating a priori qualitative knowledge and expertise about the load characteristic of the buildings as well as the usable energy without over-discharging the batteries.
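Of the three controllers, the fixed-threshold strategy is the simplest to state: discharge whenever building demand exceeds a set threshold and recharge when demand falls below it, within the converter and battery limits. The sketch below is a generic illustration of that rule, not the authors' implementation; the function name, the 15 kW limit (three 5 kVA converters) and the half-hour time step are assumptions.

```python
def fixed_threshold_control(load_kw, threshold_kw, soc_kwh, capacity_kwh,
                            max_power_kw=15.0, dt_h=0.5):
    """One control step of a fixed-threshold peak-shaving controller (illustrative).

    Returns (battery_power_kw, new_soc_kwh); positive battery power = discharge.
    """
    if load_kw > threshold_kw:
        # Discharge to clip the peak, limited by converter rating and stored energy
        p = min(load_kw - threshold_kw, max_power_kw, soc_kwh / dt_h)
    else:
        # Recharge using the spare capacity below the threshold
        p = -min(threshold_kw - load_kw, max_power_kw,
                 (capacity_kwh - soc_kwh) / dt_h)
    return p, soc_kwh - p * dt_h
```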
Dijkman, B; Wellens, H J
2001-09-01
The 7250 Jewel AF Medtronic model of ICD is the first implantable device in which both therapies for atrial arrhythmias and pacing algorithms for atrial arrhythmia prevention are available. Feasibility of this extensive atrial arrhythmia management requires correct and synergistic functioning of the different algorithms that control arrhythmias. The ability of the new pacing algorithms to stabilize the atrial rate following termination of treated atrial arrhythmias was evaluated in the marker channel registration of 600 spontaneously occurring episodes in 15 patients with the Jewel AF. All patients (55+/-15 years) had structural heart disease and documented atrial and ventricular arrhythmias. Dual chamber rate stabilization pacing was present in 245 (41%) of the episodes following arrhythmia termination and was part of the mode switching operation, during which pacing was provided in the dynamic DDI mode. This algorithm could function as atrial rate stabilization pacing only when there was a slow spontaneous atrial rhythm or in the presence of atrial premature beats conducted to the ventricles with a normal AV time. In the case of atrial premature beats with delayed or absent conduction to the ventricles, and in the case of ventricular premature beats, the algorithm stabilized the ventricular rate. The rate stabilization pacing in DDI mode during sinus rhythm following atrial arrhythmia termination was often extended in time due to the device-based definition of arrhythmia termination. This was also the case in patients in whom the DDD mode with a true atrial rate stabilization algorithm was programmed. The rate stabilization algorithms in the Jewel AF applied after atrial arrhythmia termination provide pacing that is not based on the timing of atrial events. Only under certain circumstances can the algorithm function as atrial rate stabilization pacing. Adjustments in the availability and functioning of the rate stabilization algorithms might benefit the clinical performance of pacing as part of device therapy for atrial arrhythmias.
An on-line modified least-mean-square algorithm for training neurofuzzy controllers.
Tan, Woei Wan
2007-04-01
The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.
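For reference, the baseline normalized least-mean-square update that the proposed rule builds on takes the standard form below; the fuzzy modulation of the learning rate described in the paper is represented only by a placeholder factor, since the abstract does not give its membership functions.

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-6, rate_scale=1.0):
    """One normalized LMS update: w <- w + mu * e * x / (eps + ||x||^2).

    rate_scale is a stand-in for the fuzzy modulation of the learning rate
    proposed in the paper (1.0 reproduces plain NLMS).
    """
    e = d - np.dot(w, x)                                  # prediction error
    w = w + rate_scale * mu * e * x / (eps + np.dot(x, x))
    return w, e
```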
Flow-rate control for managing communications in tracking and surveillance networks
NASA Astrophysics Data System (ADS)
Miller, Scott A.; Chong, Edwin K. P.
2007-09-01
This paper describes a primal-dual distributed algorithm for managing communications in a bandwidth-limited sensor network for tracking and surveillance. The algorithm possesses some scale-invariance properties and adaptive gains that make it more practical for applications such as tracking where the conditions change over time. A simulation study comparing this algorithm with a priority-queue-based approach in a network tracking scenario shows significant improvement in the resulting track quality when using flow control to manage communications.
Basic Research in Digital Stochastic Model Algorithmic Control.
1980-11-01
Contents include: IDCOM description, basic control computation, gradient algorithm, simulation model, model modifications, and summary. ... constraints, and 3) control trajectory computation. ... Internal model of the system: the multivariable system to be controlled is represented by a ... more flexible and adaptive, since the model, criteria, and sampling rates can be adjusted on-line. This flexibility comes from the use of the impulse ...
Kim, Yeoun Jae; Seo, Jong Hyun; Kim, Hong Rae; Kim, Kwang Gi
2017-06-01
Clinicians who frequently perform ultrasound scanning procedures often suffer from musculoskeletal disorders, arthritis, and myalgias. To minimize their occurrence and to assist clinicians, ultrasound scanning robots have been developed worldwide. Although, to date, there is still no commercially available ultrasound scanning robot, many control methods have been suggested and researched. These control algorithms are either image based or force based. If the ultrasound scanning robot control algorithm were a combination of the two approaches, it could benefit from the advantages of both. However, there are no existing control methods for ultrasound scanning robots that combine force control and image analysis. Therefore, in this work, a control algorithm is developed for an ultrasound scanning robot using force feedback and ultrasound image analysis. A manipulator-type ultrasound scanning robot named 'NCCUSR' is developed and a control algorithm for this robot is suggested and verified. First, conventional hybrid position-force control is implemented for the robot, and the hybrid position-force control algorithm is combined with ultrasound image analysis to fully control the robot. The control method is verified using a thyroid phantom. It was found that the proposed algorithm can be applied to control the ultrasound scanning robot, and experimental outcomes suggest that the images acquired using the proposed control method can yield a rating score equivalent to that of images acquired directly by the clinicians. The proposed control method can be applied to control the ultrasound scanning robot. However, more work must be completed to verify the proposed control method in order for it to become clinically feasible. Copyright © 2016 John Wiley & Sons, Ltd.
Rate-based congestion control in networks with smart links, revision. B.S. Thesis - May 1988
NASA Technical Reports Server (NTRS)
Heybey, Andrew Tyrrell
1990-01-01
The author uses a network simulator to explore rate-based congestion control in networks with smart links that can feed back information to tell senders to adjust their transmission rates. This method differs in a very important way from congestion control in which a congested network component just drops packets - the most commonly used method. It is clearly advantageous for the links in the network to communicate with the end users about the network capacity, rather than the users unilaterally picking a transmission rate. The components in the middle of the network, not the end users, have information about the capacity and traffic in the network. The author experiments with three different algorithms for calculating the control rate to feed back to the users. All of the algorithms exhibit problems in the form of large queues when simulated with a configuration modeling the dynamics of a packet-voice system. However, the problems are not with the algorithms themselves, but with the fact that feedback takes time. If the network steady-state utilization is low enough that it can absorb transients in the traffic through it, then the large queues disappear. If the users are modified to start sending slowly, to allow the network to adapt to a new flow without causing congestion, a greater portion of the network's bandwidth can be used.
The force control and path planning of electromagnetic induction-based massage robot.
Wang, Wendong; Zhang, Lei; Li, Jinzhe; Yuan, Xiaoqing; Shi, Yikai; Jiang, Qinqin; He, Lijing
2017-07-20
Massage robots are considered an effective physiological treatment to relieve fatigue, improve blood circulation, relax muscle tone, etc. Simple massage equipment has quickly spread into the market due to its low cost, but it is not widely accepted because of its restricted massage functions, and complicated structures and high cost have made multi-function massage equipment difficult to develop. This paper presents a novel massage robot which can achieve tapping, rolling, kneading and other massage operations, and proposes an improved reciprocating path planning algorithm to improve the massage effect. The number of coil turns, the coil current and the distance between the massage head and the yoke were chosen to investigate their influence on the massage force by the finite element method. The control system model of the wheeled massage robot was established, including the motor control subsystem, the path algorithm control subsystem, the parameter module of the massage robot and the virtual reality interface module. The improved reciprocating path planning algorithm was proposed to improve the regional coverage rate and massage effect. The influence of the coil current, the number of coil turns and the distance between the massage head and the yoke was simulated in Maxwell; the results indicated that the coil current has a more important influence than the other two factors. The path planning simulation of the massage robot was completed in Matlab, and the results show that the improved reciprocating path planning algorithm achieved a higher coverage rate than the traditional algorithm. From the analysis of the simulation results, it can be concluded that the number of coil turns and the distance between the moving iron core and the yoke can be determined prior to the coil current, and the force can be controlled by optimizing the structural parameters of the massage head and adjusting the coil current. The results also demonstrate that the proposed algorithm can effectively improve the path coverage rate during massage operations, and therefore the massage effect can be improved.
Vehicle active steering control research based on two-DOF robust internal model control
NASA Astrophysics Data System (ADS)
Wu, Jian; Liu, Yahui; Wang, Fengbo; Bao, Chunjiang; Sun, Qun; Zhao, Youqun
2016-07-01
Because of a vehicle's external disturbances and model uncertainties, robust control algorithms have gained popularity in vehicle stability control. Robust control usually gives up performance in order to guarantee the robustness of the control algorithm; therefore an improved robust internal model control (IMC) algorithm blending model tracking and internal model control is put forward for the active steering system in order to reach high yaw rate tracking performance with certain robustness. The proposed algorithm inherits the good model tracking ability of IMC and guarantees robustness to model uncertainties. In order to separate the model tracking design from the robustness design, the improved 2-degree-of-freedom (DOF) robust internal model controller structure is derived from the standard Youla parameterization. Simulations of a double lane change maneuver and of crosswind disturbances are conducted to evaluate the robust control algorithm, on the basis of a nonlinear vehicle simulation model with a magic tyre model. Results show that the established 2-DOF robust IMC method has better model tracking ability and a guaranteed level of robustness and robust performance, which can enhance vehicle stability and handling, regardless of variations of the vehicle model parameters and external crosswind interference. The contradiction between performance and robustness of the active steering control algorithm is resolved, and higher control performance with certain robustness to model uncertainties is obtained.
A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality
NASA Astrophysics Data System (ADS)
Liu, Li; Zhuang, Xinhua
2009-01-01
It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted directly to the newer H.264/AVC encoder because of the well-known chicken-and-egg dilemma resulting from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RD optimization process with a constant quantization parameter QP. Based on the prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated by using a closed-form approximation of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by the H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all test sequences.
NASA Technical Reports Server (NTRS)
Davis, M. W.
1984-01-01
A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA) which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.
Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation
NASA Astrophysics Data System (ADS)
Quiroz, Gregory
Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.
Controlling Item Exposure Conditional on Ability in Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Stocking, Martha L.; Lewis, Charles
1998-01-01
Ensuring item and pool security in a continuous testing environment is explored through a new method of controlling exposure rate of items conditional on ability level in computerized testing. Properties of this conditional control on exposure rate, when used in conjunction with a particular adaptive testing algorithm, are explored using simulated…
Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong
2018-01-01
Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm-neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of the weights that set the proportional, integral and derivative parameters of the PID controller. Finally, standard particle swarm optimization was used for online optimization of the initial values of the proportional, integral and derivative parameters and the neuron learning rates of the neural PID (NPID) algorithm. Experiments and simulations show that the PNPID algorithm effectively stabilized the queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822
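As a rough illustration of the PID portion of the scheme (leaving out the neural weight adaptation and the particle-swarm initialization, whose exact forms are not given in the abstract), a PID controller acting on the queue-length error to produce a drop/mark probability might look as follows; the gains and the target queue length are assumed values.

```python
class PIDQueueController:
    """Incremental PID acting on queue-length error to set a packet drop/mark probability."""

    def __init__(self, q_ref=50, kp=0.002, ki=0.0005, kd=0.001):
        self.q_ref = q_ref                     # desired queue length (packets), assumed
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_prev = 0.0
        self.integral = 0.0

    def step(self, q_len):
        e = q_len - self.q_ref
        self.integral += e
        p_drop = (self.kp * e + self.ki * self.integral
                  + self.kd * (e - self.e_prev))
        self.e_prev = e
        return min(max(p_drop, 0.0), 1.0)      # clamp to a valid probability
```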
Generation of Synthetic Spike Trains with Defined Pairwise Correlations
Niebur, Ernst
2008-01-01
Recent technological advances as well as progress in theoretical understanding of neural systems have created a need for synthetic spike trains with controlled mean rate and pairwise cross-correlation. This report introduces and analyzes a novel algorithm for the generation of discretized spike trains with arbitrary mean rates and controlled cross correlation. Pairs of spike trains with any pairwise correlation can be generated, and higher-order correlations are compatible with common synaptic input. Relations between allowable mean rates and correlations within a population are discussed. The algorithm is highly efficient, its complexity increasing linearly with the number of spike trains generated and therefore inversely with the number of cross-correlated pairs. PMID:17521277
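One common construction consistent with the shared-input interpretation mentioned above is to mix a common Bernoulli source into each train with a probability equal to the desired correlation; for equal rates this yields exactly the target pairwise correlation. The sketch below is a simplified illustration for a pair of equal-rate trains, not the paper's general algorithm for arbitrary rate and correlation combinations.

```python
import numpy as np

def correlated_spike_pair(rate_hz, corr, duration_s, dt=0.001, seed=0):
    """Two discretized spike trains with equal mean rate and pairwise correlation `corr`.

    Each bin takes the shared source with probability `corr` (same decision for both
    trains), otherwise an independent draw, giving Corr(x, y) = corr for Bernoulli bins.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s / dt)
    p = rate_hz * dt                       # spike probability per bin (assumes p << 1)
    shared = rng.random(n) < p             # common source train
    use_shared = rng.random(n) < corr      # shared mixing decision per bin
    indep = rng.random((2, n)) < p         # private sources
    x = np.where(use_shared, shared, indep[0])
    y = np.where(use_shared, shared, indep[1])
    return x, y
```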
Vega roll and attitude control system algorithms trade-off study
NASA Astrophysics Data System (ADS)
Paulino, N.; Cuciniello, G.; Cruciani, I.; Corraro, F.; Spallotta, D.; Nebula, F.
2013-12-01
This paper describes the trade-off study for the selection of the most suitable algorithms for the Roll and Attitude Control System (RACS) within the FPS-A program, aimed at developing the new Flight Program Software of the VEGA launcher. Two algorithms were analyzed: Switching Lines (SL) and Quaternion Feedback Regulation. Using a development simulation tool that models two critical flight phases (the Long Coasting Phase (LCP) and the Payload Release (PLR) phase), both algorithms were assessed with Monte Carlo batch simulations for both phases. The statistical outcomes demonstrate a 100 percent success rate for Quaternion Feedback Regulation and support the choice of this method.
Fair and efficient network congestion control based on minority game
NASA Astrophysics Data System (ADS)
Wang, Zuxi; Wang, Wen; Hu, Hanping; Deng, Zhaozhang
2011-12-01
Low link utilization, RTT unfairness and unfairness in multi-bottleneck networks are common problems in present network congestion control algorithms. Through an analogy between network congestion control and the "El Farol Bar" problem, we establish a congestion control model based on the minority game (MG), and then present a novel network congestion control algorithm based on the model. Simulation results indicate that the proposed algorithm achieves link utilization close to 100%, a zero packet loss rate, and small queue sizes. In addition, RTT unfairness and multi-bottleneck unfairness can be resolved, achieving max-min fairness in multi-bottleneck networks while efficiently weakening the "ping-pong" oscillation caused by global synchronization.
Using game theory for perceptual tuned rate control algorithm in video coding
NASA Astrophysics Data System (ADS)
Luo, Jiancong; Ahmad, Ishfaq
2005-03-01
This paper proposes a game theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve an accurate bit rate with good perceptual quality, and to maintain a stable buffer level.
NASA Technical Reports Server (NTRS)
Molusis, J. A.; Mookerjee, P.; Bar-Shalom, Y.
1983-01-01
The effect of nonlinearity on the convergence of the local linear and global linear adaptive controllers is evaluated. A nonlinear helicopter vibration model is selected for the evaluation which has sufficient nonlinearity, including multiple minima, to assess the vibration reduction capability of the adaptive controllers. The adaptive control algorithms are based upon a linear transfer matrix assumption, and the presence of nonlinearity has a significant effect on algorithm behavior. Simulation results are presented which demonstrate the importance of the caution property in the global linear controller. Caution is represented by a time-varying rate weighting term in the local linear controller, and this improves the algorithm convergence. Nonlinearity in some cases causes Kalman filter divergence. Two forms of the Kalman filter covariance equation are investigated.
An improved affine projection algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo
2017-08-01
The affine projection algorithm is a signal-reuse algorithm, and it has a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect the performance of the algorithm: the step-size factor and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it achieves a smaller steady-state error and a faster convergence speed. Simulation results show that its performance is superior to that of the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm obtains very good results.
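For context, the basic affine projection update reuses the K most recent input vectors and solves a small regularized system at each step, as sketched below; the variable step-size rule of VSS-APA is not reproduced because the abstract does not specify it, so mu is left as a constant placeholder.

```python
import numpy as np

def apa_step(w, X, d, mu=0.5, delta=1e-4):
    """One affine projection update: w <- w + mu * X^T (X X^T + delta I)^(-1) e.

    w : current filter weights, shape (L,)
    X : matrix of the K most recent input vectors, shape (K, L)
    d : corresponding desired samples, shape (K,)
    """
    e = d - X @ w                                               # a-priori error vector
    w = w + mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(len(d)), e)
    return w, e
```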
Predicting fruit fly's sensing rate with insect flight simulations.
Chang, Song; Wang, Z Jane
2014-08-05
Without sensory feedback, flies cannot fly. Exactly how various feedback controls work in insects is a complex puzzle to solve. What do insects measure to stabilize their flight? How often and how fast must insects adjust their wings to remain stable? To gain insights into algorithms used by insects to control their dynamic instability, we develop a simulation tool to study free flight. To stabilize flight, we construct a control algorithm that modulates wing motion based on discrete measurements of the body-pitch orientation. Our simulations give theoretical bounds on both the sensing rate and the delay time between sensing and actuation. Interpreting our findings together with experimental results on fruit flies' reaction time and sensory motor reflexes, we conjecture that fruit flies sense their kinematic states every wing beat to stabilize their flight. We further propose a candidate for such a control involving the fly's haltere and first basalar motor neuron. Although we focus on fruit flies as a case study, the framework for our simulation and discrete control algorithms is applicable to studies of both natural and man-made fliers.
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
Fixed-rate layered multicast congestion control
NASA Astrophysics Data System (ADS)
Bing, Zhang; Bing, Yuan; Zengji, Liu
2006-10-01
A new fixed-rate layered multicast congestion control algorithm called FLMCC is proposed. The sender of a multicast session transmits data packets at a fixed rate on each layer, while receivers each obtain different throughput by cumulatively subscribing to different numbers of layers based on their expected rates. In order to provide TCP-friendliness and estimate the expected rate accurately, a window-based mechanism implemented at the receivers is presented. To achieve this, each receiver maintains a congestion window, adjusts it based on the GAIMD algorithm, and calculates an expected rate from the congestion window. To measure RTT, a new method is presented which combines an accurate measurement with a rough estimation. A feedback suppression scheme based on a random timer mechanism is used to avoid feedback implosion in the accurate measurement. The protocol is simple to implement. Simulations indicate that FLMCC shows good TCP-friendliness, responsiveness and intra-protocol fairness, and provides high link utilization.
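A minimal sketch of the receiver-side mechanism described above, under assumed parameter values: the congestion window follows a GAIMD (alpha, beta) rule driven by per-packet loss indications, an expected rate is derived from the window and the measured RTT, and layers are joined cumulatively while their total fixed rate fits under that estimate. Function names and defaults are illustrative, not taken from the paper.

```python
def gaimd_update(cwnd, loss, alpha=1.0, beta=0.5, cwnd_min=1.0):
    """GAIMD window update: additive increase alpha per RTT, multiplicative decrease beta."""
    if loss:
        return max(cwnd_min, (1.0 - beta) * cwnd)
    return cwnd + alpha / cwnd

def expected_rate(cwnd, rtt_s, pkt_bytes=1000):
    """Expected rate (bit/s) the receiver uses to decide how many layers to subscribe to."""
    return cwnd * pkt_bytes * 8.0 / rtt_s

def layers_to_join(rate_bps, layer_rates_bps):
    """Cumulatively subscribe to layers while their total fixed rate fits the expected rate."""
    total, n = 0.0, 0
    for r in layer_rates_bps:
        if total + r > rate_bps:
            break
        total += r
        n += 1
    return n
```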
Operational performance of the three bean salad control algorithm on the ACRR
NASA Astrophysics Data System (ADS)
Ball, Russell M.; Madaras, John J.; Trowbridge, F. Ray; Talley, Darren G.; Parma, Edward J.
1991-01-01
Experimental tests on the Annular Core Research Reactor have confirmed that the ``Three-Bean-Salad'' control algorithm based on the Pontryagin maximum principle can change the power of a nuclear reactor many decades with a very fast startup rate and minimal overshoot. The paper describes the results of simulations and operations up to 25 MW and 87 decades per minute.
Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.
1997-01-01
The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
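As a compact reminder of the conventional add-compare-select structure discussed here (not the compare-select-add variant proposed in the chapter), a hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal can be written as follows; the choice of code is an illustrative assumption.

```python
def conv_encode(bits):
    """Rate-1/2, constraint-length-3 convolutional encoder, generators (7, 5) octal."""
    s1 = s0 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s0, u ^ s0]   # parity bits for generators 111 and 101
        s1, s0 = u, s1                 # shift the register
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding (add-compare-select) for the (7, 5) code."""
    INF = float('inf')
    metric = [0.0, INF, INF, INF]      # start from the all-zero state
    paths = [[], [], [], []]
    for t in range(len(received) // 2):
        r0, r1 = received[2 * t], received[2 * t + 1]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):             # state encodes the two previous input bits
            if metric[s] == INF:
                continue
            s1, s0 = s >> 1, s & 1
            for u in (0, 1):           # hypothesized input bit
                branch = ((u ^ s1 ^ s0) != r0) + ((u ^ s0) != r1)  # Hamming distance
                ns = (u << 1) | s1
                m = metric[s] + branch      # add
                if m < new_metric[ns]:      # compare-select
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best]
```

In the error-free case, viterbi_decode(conv_encode([1, 0, 1, 1, 0, 0])) recovers [1, 0, 1, 1, 0, 0].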
Optimal Decentralized Protocol for Electric Vehicle Charging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gan, LW; Topcu, U; Low, SH
We propose a decentralized algorithm to optimally schedule electric vehicle (EV) charging. The algorithm exploits the elasticity of electric vehicle loads to fill the valleys in electric load profiles. We first formulate the EV charging scheduling problem as an optimal control problem, whose objective is to impose a generalized notion of valley-filling, and study properties of optimal charging profiles. We then give a decentralized algorithm to iteratively solve the optimal control problem. In each iteration, EVs update their charging profiles according to the control signal broadcast by the utility company, and the utility company alters the control signal to guide their updates. The algorithm converges to optimal charging profiles (that are as "flat" as they can possibly be) irrespective of the specifications (e.g., maximum charging rate and deadline) of EVs, even if EVs do not necessarily update their charging profiles in every iteration, and use potentially outdated control signals when they update. Moreover, the algorithm only requires each EV to solve its local problem, hence its implementation requires low computation capability. We also extend the algorithm to track a given load profile and to real-time implementation.
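A simplified sketch of the broadcast-and-update iteration described above, under assumed forms for both the control signal (taken proportional to the aggregate load) and the local EV update (a projection of the price-shifted previous profile onto the EV's energy and charging-rate constraints); the step size, iteration count and feasibility handling are illustrative and the paper's exact update may differ.

```python
import numpy as np

def project_profile(tentative, energy_req, r_max, iters=60):
    """Project a tentative profile onto {0 <= r <= r_max, sum(r) = energy_req}
    by bisection on a common shift nu: r = clip(tentative - nu, 0, r_max)."""
    lo, hi = tentative.min() - r_max - 1.0, tentative.max() + 1.0
    for _ in range(iters):
        nu = 0.5 * (lo + hi)
        if np.clip(tentative - nu, 0.0, r_max).sum() > energy_req:
            lo = nu          # shift up to reduce total energy
        else:
            hi = nu
    return np.clip(tentative - 0.5 * (lo + hi), 0.0, r_max)

def decentralized_ev_charging(base_load, energy_req, r_max, gamma=0.05, n_iter=50):
    """Valley-filling sketch: the utility broadcasts a signal proportional to the
    aggregate load; each EV re-projects its price-shifted profile onto its constraints."""
    base_load = np.asarray(base_load, dtype=float)
    energy_req = np.asarray(energy_req, dtype=float)
    r_max = np.asarray(r_max, dtype=float)
    profiles = np.zeros((len(energy_req), len(base_load)))
    for _ in range(n_iter):
        price = gamma * (base_load + profiles.sum(axis=0))   # broadcast control signal
        for i in range(len(energy_req)):
            tentative = profiles[i] - price                  # descend the local objective
            profiles[i] = project_profile(tentative, energy_req[i], r_max[i])
    return profiles
```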
NASA Astrophysics Data System (ADS)
Frassinetti, L.; Olofsson, K. E. J.; Brunsell, P. R.; Drake, J. R.
2011-06-01
The EXTRAP T2R feedback system (active coils, sensor coils and controller) is used to study and develop new tools for advanced control of the MHD instabilities in fusion plasmas. New feedback algorithms developed in EXTRAP T2R reversed-field pinch allow flexible and independent control of each magnetic harmonic. Methods developed in control theory and applied to EXTRAP T2R allow a closed-loop identification of the machine plant and of the resistive wall modes growth rates. The plant identification is the starting point for the development of output-tracking algorithms which enable the generation of external magnetic perturbations. These algorithms will then be used to study the effect of a resonant magnetic perturbation (RMP) on the tearing mode (TM) dynamics. It will be shown that the stationary RMP can induce oscillations in the amplitude and jumps in the phase of the rotating TM. It will be shown that the RMP strongly affects the magnetic island position.
Wang, Jie-Sheng; Han, Shuang
2015-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has better optimization capability, it has a slow convergence velocity and easily falls into local optima. Therefore, in this paper, the velocity vector and position vector of GSA are adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034
The design of digital-adaptive controllers for VTOL aircraft
NASA Technical Reports Server (NTRS)
Stengel, R. F.; Broussard, J. R.; Berry, P. W.
1976-01-01
Design procedures for VTOL automatic control systems have been developed and are presented. Using linear-optimal estimation and control techniques as a starting point, digital-adaptive control laws have been designed for the VALT Research Aircraft, a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. These control laws are designed to interface with velocity-command and attitude-command guidance logic, which could be used in short-haul VTOL operations. Developments reported here include new algorithms for designing non-zero-set-point digital regulators, design procedures for rate-limited systems, and algorithms for dynamic control trim setting.
Discrete-Time Deterministic $Q$-Learning: A Novel Convergence Analysis.
Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo
2017-05-01
In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated over the whole state and control space, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, where the convergence criterion on the learning rates for traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are first developed. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and compute the iterative control law, respectively, to facilitate the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
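A schematic rendering of the full-space update described above for a discretized deterministic system: each iteration refreshes the Q table for every state-control pair at once rather than along a single trajectory, with the learning rate set to one. The dynamics, cost, grid and discount factor below are placeholders, not the paper's examples.

```python
import numpy as np

def deterministic_q_learning(states, controls, f, cost, gamma=0.95, n_iter=200):
    """Iterate Q_{k+1}(x, u) = cost(x, u) + gamma * min_{u'} Q_k(f(x, u), u')
    over the whole discretized state/control grid (deterministic Q-learning sketch)."""
    states = np.asarray(states, dtype=float)
    controls = np.asarray(controls, dtype=float)
    Q = np.zeros((len(states), len(controls)))
    nearest = lambda x: int(np.argmin(np.abs(states - x)))  # snap successor onto the grid
    for _ in range(n_iter):
        Q_new = np.empty_like(Q)
        for i, x in enumerate(states):
            for j, u in enumerate(controls):
                Q_new[i, j] = cost(x, u) + gamma * Q[nearest(f(x, u))].min()
        Q = Q_new
    policy = controls[Q.argmin(axis=1)]      # greedy control law from the final Q
    return Q, policy
```

For instance, deterministic_q_learning(np.linspace(-1, 1, 41), np.linspace(-1, 1, 21), lambda x, u: 0.9 * x + 0.1 * u, lambda x, u: x**2 + u**2) approximates the optimal control law of a scalar linear system with quadratic cost.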
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Smith, Austin; Oliver, T. Emerson
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GNC software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault-detection and measurement down selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
Modular Aero-Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Parker, Khary I.; Guo, Ten-Huei
2006-01-01
The Modular Aero-Propulsion System Simulation (MAPSS) is a graphical simulation environment designed for the development of advanced control algorithms and rapid testing of these algorithms on a generic computational model of a turbofan engine and its control system. MAPSS is a nonlinear, non-real-time simulation comprising a Component Level Model (CLM) module and a Controller-and-Actuator Dynamics (CAD) module. The CLM module simulates the dynamics of engine components at a sampling rate of 2,500 Hz. The controller submodule of the CAD module simulates a digital controller, which has a typical update rate of 50 Hz. The sampling rate for the actuators in the CAD module is the same as that of the CLM. MAPSS provides a graphical user interface that affords easy access to engine-operation, engine-health, and control parameters; is used to enter such input model parameters as power lever angle (PLA), Mach number, and altitude; and can be used to change controller and engine parameters. Output variables are selectable by the user. Output data as well as any changes to constants and other parameters can be saved and reloaded into the GUI later.
A dual method for optimal control problems with initial and final boundary constraints.
NASA Technical Reports Server (NTRS)
Pironneau, O.; Polak, E.
1973-01-01
This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.
NASA Astrophysics Data System (ADS)
Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.
2008-05-01
High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required, which result in higher complexity, more chip area and increased power consumption. However, this contrasts with the limited power supply of mobile devices. This presentation discusses an HSDPA receiver which has been optimized regarding power consumption, with the focus on the algorithmic and architectural levels. At the algorithmic level, the Rake combiner, Prefilter-Rake equalizer and MMSE equalizer are compared regarding their BER performance. Both equalizer approaches provide a significant increase in performance for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm which achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models regarding their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer the power estimates of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared with the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation described both in SystemC and VHDL, targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts receiver parameters such as filter size and oversampling ratio to minimize the power consumption while maintaining the required performance. The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an estimated power reduction of up to 40% while the BER performance is not affected. This work utilizes SystemC and ORINOCO for a first estimation of power consumption at an early step of the design flow. Thereby, algorithms can be compared in different operating modes, including the effects of control units. Here, an algorithm having higher peak complexity and power consumption but providing more flexibility showed lower consumption in normal operating modes compared to the algorithm optimized for peak performance.
Filtered-x generalized mixed norm (FXGMN) algorithm for active noise control
NASA Astrophysics Data System (ADS)
Song, Pucha; Zhao, Haiquan
2018-07-01
The standard adaptive filtering algorithm with a single error norm exhibits a slow convergence rate and poor noise reduction performance in specific environments. To overcome this drawback, a filtered-x generalized mixed norm (FXGMN) algorithm for active noise control (ANC) systems is proposed. The FXGMN algorithm is developed by using a convex mixture of lp and lq norms as the cost function, so that it can be viewed as a generalized version of most existing adaptive filtering algorithms, and it reduces to a specific algorithm by choosing certain parameters. In particular, it can be used for ANC under Gaussian and non-Gaussian noise environments (including impulsive noise with a symmetric α-stable (SαS) distribution). To further enhance the algorithm performance, namely convergence speed and noise reduction, a convex combination of the FXGMN algorithm (C-FXGMN) is presented. Moreover, the computational complexity of the proposed algorithms is analyzed, and a stability condition for the proposed algorithms is provided. Simulation results show that the proposed FXGMN and C-FXGMN algorithms can achieve faster convergence and higher noise reduction than other existing algorithms under various noise input conditions, and the C-FXGMN algorithm outperforms the FXGMN.
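To make the mixed-norm cost concrete, a filtered-x weight update minimizing a convex mixture lam*|e|^p + (1-lam)*|e|^q can be sketched as below, assuming a known FIR estimate of the secondary path; this is an illustrative rendering of the idea rather than the exact FXGMN or C-FXGMN recursion, and all parameter values are placeholders.

```python
import numpy as np

def mixed_norm_fx_step(w, x_buf, e, s_hat, mu=1e-3, lam=0.5, p=1.0, q=2.0, eps=1e-8):
    """One filtered-x update for the cost lam*|e|^p + (1-lam)*|e|^q.

    w     : adaptive FIR control filter weights, shape (L,)
    x_buf : most recent reference samples, newest first, shape (L + len(s_hat) - 1,)
    e     : current residual error sample (assumed e = d + secondary-path output)
    s_hat : estimated secondary-path FIR coefficients
    """
    L = len(w)
    # Filtered reference: delayed reference samples convolved with the path estimate
    x_f = np.array([np.dot(s_hat, x_buf[k:k + len(s_hat)]) for k in range(L)])
    # Gradient of the mixed-norm cost with respect to the error sample
    g = (lam * p * (abs(e) + eps) ** (p - 1)
         + (1 - lam) * q * (abs(e) + eps) ** (q - 1)) * np.sign(e)
    return w - mu * g * x_f
```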
Nonuniformity correction for an infrared focal plane array based on diamond search block matching.
Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian
2016-05-01
In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring severely degrade the correction quality. In this paper, an improved algorithm based on the diamond search block matching algorithm and an adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update the correction parameters. During gradient descent, the local standard deviation and a threshold are used to control the learning rate and avoid the accumulation of matching error. Finally, the nonuniformity correction is realized by a linear model with the updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with fewer ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.
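A minimal sketch of the gain/offset update described above follows. The displacement is assumed to come from the diamond-search block matcher (not reimplemented here), and the window size, threshold, and base learning rate are placeholder values; gating the learning rate on the local standard deviation is one simple reading of the adaptive-rate idea.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def update_correction(f1, f2, disp, g, o, base_lr=0.05, std_thresh=8.0, win=7):
    """One gradient-descent update of per-pixel gain g and offset o in the
    linear correction model  corrected = g * raw + o.
    f1, f2 : consecutive raw frames (float arrays)
    disp   : (dy, dx) displacement of f2 relative to f1, assumed to come
             from diamond-search block matching
    The learning rate is set to zero where the local standard deviation is
    high, to avoid accumulating matching errors."""
    dy, dx = disp
    c1 = g * f1 + o
    c2 = g * f2 + o
    H, W = f1.shape
    y0, y1 = max(0, dy), min(H, H + dy)          # overlap of f1 and shifted f2
    x0, x1 = max(0, dx), min(W, W + dx)
    a = c1[y0:y1, x0:x1]
    b = c2[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    err = a - b                                   # same scene point, same corrected value
    local_std = np.sqrt(np.maximum(
        uniform_filter(a**2, win) - uniform_filter(a, win)**2, 0.0))
    lr = np.where(local_std < std_thresh, base_lr, 0.0)
    g[y0:y1, x0:x1] -= lr * err * f1[y0:y1, x0:x1]   # d(err)/d(gain) = raw pixel value
    o[y0:y1, x0:x1] -= lr * err                      # d(err)/d(offset) = 1
    return g, o
```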
McLean, Gary; Martin, Julie Langan; Martin, Daniel J; Guthrie, Bruce; Mercer, Stewart W; Smith, Daniel J
2014-10-01
Schizophrenia is associated with increased cardiovascular mortality. Although cardiovascular disease (CVD) risk prediction algorithms are widely used in the general population, their utility for patients with schizophrenia is unknown. A primary care dataset was used to compare CVD risk scores (Joint British Societies (JBS) score), cardiovascular risk factors, rates of pre-existing CVD and age of first diagnosis of CVD for schizophrenia (n=1997) relative to population controls (n=215,165). Pre-existing rates of CVD and the recording of risk factors for those without CVD were higher in the schizophrenia cohort in the younger age groups, for both genders. Those with schizophrenia were more likely to have a first diagnosis of CVD at a younger age, with nearly half of men with schizophrenia plus CVD diagnosed under the age of 55 (schizophrenia men 46.1% vs. control men 34.8%, p<0.001; schizophrenia women 28.9% vs. control women 23.8%, p<0.001). However, despite high rates of CVD risk factors within the schizophrenia group, only a very small percentage (3.2% of men and 7.5% of women) of those with schizophrenia under age 55 were correctly identified as high risk for CVD according to the JBS risk algorithm. The JBS2 risk score identified only a small proportion of individuals with schizophrenia under the age of 55 as being at high risk of CVD, despite high rates of risk factors and high rates of first diagnosis of CVD within this age group. The validity of CVD risk prediction algorithms for schizophrenia needs further research. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth S.; Paradiso, Joseph; Bergmann, Edward V.; Rowell, Derek
1990-01-01
Two steering laws are presented for single-gimbal control moment gyroscopes. An approach using the Moore-Penrose pseudoinverse with a nondirectional null-motion algorithm is shown by example to avoid internal singularities for unidirectional torque commands, for which existing algorithms fail. Because this is still a tangent-based approach, however, singularity avoidance cannot be guaranteed. The singularity robust inverse is introduced as an alternative to the pseudoinverse for computing torque-producing gimbal rates near singular states. This approach, coupled with the nondirectional null algorithm, is shown by example to provide better steering law performance by allowing torque errors to be produced in the vicinity of singular states.
Algorithms for a Closed-Loop Artificial Pancreas: The Case for Model Predictive Control
Bequette, B. Wayne
2013-01-01
The relative merits of model predictive control (MPC) and proportional-integral-derivative (PID) control are discussed, with the end goal of a closed-loop artificial pancreas (AP). It is stressed that neither MPC nor PID is a single algorithm; rather, each is an approach or strategy that may be implemented very differently by different engineers. The primary advantages of MPC are that (i) constraints on the insulin delivery rate (and/or insulin on board) can be explicitly included in the control calculation; (ii) it is a general framework that makes it relatively easy to include the effect of meals, exercise, and other events that are a function of the time of day; and (iii) it is flexible enough to include many different objectives, from set-point tracking (target) to zone (control to range). In the end, however, it is recognized that the control algorithm, while important, represents only a portion of the effort required to develop a closed-loop AP. Thus, any number of algorithms/approaches can be successful; the engineers involved in the design must have experience with the particular technique, including the important experience of implementing the algorithm in human studies and not simply through simulation studies. PMID:24351190
NASA Technical Reports Server (NTRS)
Swanson, T. D.; Ollendorf, S.
1979-01-01
This paper addresses the potential for enhanced solar system performance through sophisticated control of the collector loop flow rate. Computer simulations utilizing the TRNSYS solar energy program were performed to study the relative effect on system performance of eight specific control algorithms. Six of these control algorithms are of the proportional type: two are concave exponentials, two are simple linear functions, and two are convex exponentials. These six functions are typical of what might be expected from future, more advanced, controllers. The other two algorithms are of the on/off type and are thus typical of existing control devices. Results of extensive computer simulations utilizing actual weather data indicate that proportional control does not significantly improve system performance. However, it is shown that thermal stratification in the liquid storage tank may significantly improve performance.
Mohammadi-Abdar, Hassan; Ridgel, Angela L.; Discenzo, Fred M.; Loparo, Kenneth A.
2016-01-01
Recent studies in rehabilitation of Parkinson’s disease (PD) have shown that cycling on a tandem bike at a high pedaling rate can reduce the symptoms of the disease. In this research, a smart motorized bicycle has been designed and built for assisting Parkinson’s patients with exercise to improve motor function. The exercise bike can accurately control the rider’s experience at an accelerated pedaling rate while capturing real-time test data. Here, the design and development of the electronics and hardware as well as the software and control algorithms are presented. Two control algorithms have been developed for the bike; one that implements an inertia load (static mode) and one that implements a speed reference (dynamic mode). In static mode the bike operates as a regular exercise bike with programmable resistance (load) that captures and records the required signals such as heart rate, cadence and power. In dynamic mode the bike operates at a user-selected speed (cadence) with programmable variability in speed that has been shown to be essential to achieving the desired motor performance benefits for PD patients. In addition, the flexible and extensible design of the bike permits readily changing the control algorithm and incorporating additional I/O as needed to provide a wide range of riding experiences. Furthermore, the network-enabled controller provides remote access to bike data during a riding session. PMID:27298575
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ball, R.M.; Madaras, J.J.; Trowbridge, F.R. Jr.
Experimental tests on the Annular Core Research Reactor have confirmed that the "Three-Bean-Salad" control algorithm based on the Pontryagin maximum principle can change the power of a nuclear reactor over many decades with a very fast startup rate and minimal overshoot. The paper describes the results of simulations and operations up to 25 MW and 87 decades per minute. 3 refs., 4 figs., 1 tab.
Baur, Kilian; Wolf, Peter; Riener, Robert; Duarte, Jaime E
2017-07-01
Multiplayer environments are thought to increase the training intensity in robot-aided rehabilitation therapy after stroke. We developed a haptic-based environment to investigate the dynamics of two-player training during time-constrained reaching movements using the ARMin rehabilitation robot. We implemented a challenge level adaptation algorithm that controlled a virtual damping coefficient to reach a desired success rate. We tested the algorithm's effectiveness in regulating the success rate during game play in a simulation with computer-controlled players, in a feasibility study with six unimpaired players, and in a single session with one stroke patient. The algorithm demonstrated its capacity to adjust the damping coefficient to reach three levels of success rate (low [50%], moderate [70%], and high [90%]) during single-player and multiplayer training. For the patient, who was tested in single-player mode at the moderate success rate only, the algorithm also showed promising behavior. Results of the feasibility study showed that, for increasing the player's willingness to play under a more challenging task condition, the effect of the challenge level adaptation, regardless of whether the game is played in single-player or multiplayer mode, might be more important than the provision of a multiplayer setting alone. Furthermore, the multiplayer setting tends to be a motivating and encouraging therapy component. Based on these results we will optimize and expand the multiplayer training platform and further investigate multiplayer settings in stroke therapy.
Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft
NASA Technical Reports Server (NTRS)
Harman, R.; Thienel, J.; Oshman, Yaakov
2004-01-01
This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for the Far Ultraviolet Spectroscopic Explorer (FUSE). The results of two approaches are presented, one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a pseudolinear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months after the failure of two of the reaction wheels. The question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is analyzed.
A Polygon Model for Wireless Sensor Network Deployment with Directional Sensing Areas
Wu, Chun-Hsien; Chung, Yeh-Ching
2009-01-01
The modeling of the sensing area of a sensor node is essential for the deployment algorithm of wireless sensor networks (WSNs). In this paper, a polygon model is proposed for the sensor node with a directional sensing area. In addition, a WSN deployment algorithm is presented with topology control and scoring mechanisms to maintain network connectivity and improve the sensing coverage rate. To evaluate the proposed polygon model and WSN deployment algorithm, a simulation is conducted. The simulation results show that the proposed polygon model outperforms the existing disk model and circular sector model in terms of the maximum sensing coverage rate. PMID:22303159
Real-time Adaptive Control Using Neural Generalized Predictive Control
NASA Technical Reports Server (NTRS)
Haley, Pam; Soloway, Don; Gold, Brian
1999-01-01
The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time constants. Generalized Predictive Control has classically been used in process control, where linear control laws were formulated for plants with relatively slow time constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. With a neural network the control laws are nonlinear, and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration rate makes this a viable algorithm for real-time control.
NASA Astrophysics Data System (ADS)
Ousaloo, H. S.; Nodeh, M. T.; Mehrabian, R.
2016-09-01
The goal of this paper is to verify and validate a Spin Magnetic Attitude Control System (SMACS) program and to perform Hardware-In-the-Loop (HIL) air-bearing experiments. A study of a closed-loop magnetic spin controller is presented using only magnetic rods as actuators. The magnetic spin rate control approach is verified with an Attitude Control System (ACS) air-bearing MATLAB® SIMULINK® model and a hardware-embedded LABVIEW® algorithm that controls the spin rate of the test platform on a spherical air-bearing table. The SIMULINK® model includes the dynamic model of the air bearing, its disturbances, actuator emulation, and the time delays caused by on-board calculations. The air-bearing simulator is employed to develop, improve, and carry out objective tests of the magnetic torque rods and the spin rate control algorithm in an experimental framework and to provide a more realistic demonstration of the expected attitude control performance than software-based architectures. Six sets of two torque rods are used as actuators for the SMACS. The system is implemented and simulated to fulfill the mission requirements, including spinning the satellite up to 12 deg/s around the z-axis. These techniques are documented for the full nonlinear equations of motion of the system, and their performance is compared in several simulations.
Measurement of Solid Rocket Propellant Burning Rate Using X-ray Imaging
NASA Astrophysics Data System (ADS)
Denny, Matthew D.
The burning rate of solid propellants can be difficult to measure for unusual burning surface geometries, but X-ray imaging can be used to measure burning rate. The objectives of this work were to measure the baseline burning rate of an electrically-controlled solid propellant (ESP) formulation with real-time X-ray radiography and to determine the uncertainty of the measurements. Two edge detection algorithms were written to track the burning surface in X-ray videos. The edge detection algorithms were informed by intensity profiles of simulated 2-D X-ray images. With a 95% confidence level, the burning rates measured by the Projected-Slope Intersection algorithm in the two combustion experiments conducted were 0.0839 in/s +/-2.86% at an average pressure of 407 psi +/-3.6% and 0.0882 in/s +/-3.04% at 410 psi +/-3.9%. The uncertainty percentages were based on the statistics of a Monte Carlo analysis on burning rate.
Spacecraft Attitude Tracking and Maneuver Using Combined Magnetic Actuators
NASA Technical Reports Server (NTRS)
Zhou, Zhiqiang
2010-01-01
The accuracy of spacecraft attitude control using magnetic actuators only is low, on the order of 0.4-5 degrees. The key reason is that the magnetic torque is two-dimensional: it lies only in the plane perpendicular to the magnetic field vector. In this paper, novel attitude control algorithms using the combination of magnetic actuators with Reaction Wheel Assemblies (RWAs) or other types of actuators, such as thrusters, are presented. The combination of magnetic actuators with one or two RWAs aligned with different body axes expands the two-dimensional control torque to three dimensions. The algorithms guarantee that the spacecraft attitude and rates track the commanded attitude precisely. A design example is presented for nadir pointing, pitch, and yaw maneuvers. The results show that precise attitude tracking can be achieved and that the attitude control accuracy is comparable with RWA-based attitude control. The algorithms are also useful for RWA-based attitude control: when only one or two RWAs remain workable due to RWA failures, the attitude control system can switch to the control algorithms for the magnetic actuators combined with the RWAs without going to safe mode, and the control accuracy can be maintained.
Renard, P; Van Breusegem, V; Nguyen, M T; Naveau, H; Nyns, E J
1991-10-20
An adaptive control algorithm has been implemented on a biomethanation process to maintain propionate concentration, a stable variable, at a given low value, by steering the dilution rate. It was thereby expected to ensure the stability of the process during the startup and during steady-state running with an acceptable performance. The methane pilot reactor was operated in the completely mixed, once-through mode and computer-controlled during 161 days. The results yielded the real-life validation of the adaptive control algorithm, and documented the stability and acceptable performance expected.
Quick Vegas: Improving Performance of TCP Vegas for High Bandwidth-Delay Product Networks
NASA Astrophysics Data System (ADS)
Chan, Yi-Cheng; Lin, Chia-Liang; Ho, Cheng-Yuan
An important issue in designing a TCP congestion control algorithm is that it should allow the protocol to quickly adjust the end-to-end communication rate to the bandwidth on the bottleneck link. However, the TCP congestion control may function poorly in high bandwidth-delay product networks because of its slow response with large congestion windows. In this paper, we propose an enhanced version of TCP Vegas called Quick Vegas, in which we present an efficient congestion window control algorithm for a TCP source. Our algorithm improves the slow-start and congestion avoidance techniques of original Vegas. Simulation results show that Quick Vegas significantly improves the performance of connections as well as remaining fair when the bandwidth-delay product increases.
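For context, the baseline Vegas congestion-avoidance rule that Quick Vegas builds on can be sketched as follows; the alpha/beta constants are the conventional illustrative values, and the paper's modified slow-start and window-adjustment rules are not reproduced here.

```python
def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=1.0, beta=3.0):
    """Baseline TCP Vegas congestion-avoidance step, applied once per RTT.
    cwnd is in segments, RTTs in seconds.  diff estimates how many extra
    segments are sitting in network queues; Vegas keeps it between alpha
    and beta."""
    expected = cwnd / base_rtt          # rate if there were no queuing
    actual = cwnd / current_rtt         # measured rate
    diff = (expected - actual) * base_rtt
    if diff < alpha:
        return cwnd + 1                 # under-utilizing: grow linearly
    if diff > beta:
        return cwnd - 1                 # queues building: back off
    return cwnd                         # inside the target band: hold
```

Quick Vegas targets the weakness visible here: with a large window, adding or removing a single segment per RTT adapts too slowly on high bandwidth-delay product paths.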
Youssef, Joseph El; Castle, Jessica R; Branigan, Deborah L; Massoud, Ryan G; Breen, Matthew E; Jacobs, Peter G; Bequette, B Wayne; Ward, W Kenneth
2011-01-01
To be effective in type 1 diabetes, algorithms must be able to limit hyperglycemic excursions resulting from medical and emotional stress. We tested an algorithm that estimates insulin sensitivity at regular intervals and continually adjusts gain factors of a fading memory proportional-derivative (FMPD) algorithm. In order to assess whether the algorithm could appropriately adapt and limit the degree of hyperglycemia, we administered oral hydrocortisone repeatedly to create insulin resistance. We compared this indirect adaptive proportional-derivative (APD) algorithm to the FMPD algorithm, which used fixed gain parameters. Each subject with type 1 diabetes (n = 14) was studied on two occasions, each for 33 h. The APD algorithm consistently identified a fall in insulin sensitivity after hydrocortisone. The gain factors and insulin infusion rates were appropriately increased, leading to satisfactory glycemic control after adaptation (premeal glucose on day 2, 148 ± 6 mg/dl). After sufficient time was allowed for adaptation, the late postprandial glucose increment was significantly lower than when measured shortly after the onset of the steroid effect. In addition, during the controlled comparison, glycemia was significantly lower with the APD algorithm than with the FMPD algorithm. No increase in hypoglycemic frequency was found in the APD-only arm. An afferent system of duplicate amperometric sensors demonstrated a high degree of accuracy; the mean absolute relative difference of the sensor used to control the algorithm was 9.6 ± 0.5%. We conclude that an adaptive algorithm that frequently estimates insulin sensitivity and adjusts gain factors is capable of minimizing corticosteroid-induced stress hyperglycemia. PMID:22226248
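The flavor of the gain adaptation can be illustrated with a toy fading-memory proportional-derivative rule whose gains are rescaled by an insulin-sensitivity estimate. The functional form and every constant below are assumptions for illustration only, not the published controller and not clinical values.

```python
def fmpd_rate(glucose_hist, target=110.0, kp=0.02, kd=0.3, lam=0.7,
              basal=0.8, sensitivity_scale=1.0):
    """Toy fading-memory proportional-derivative (FMPD) insulin-rate rule.
    glucose_hist : recent glucose samples in mg/dl, oldest first (non-empty)
    Proportional and derivative terms are exponentially weighted toward the
    most recent samples.  An estimated fall in insulin sensitivity would be
    reflected by raising sensitivity_scale, which multiplies the gains.
    All numbers are placeholders, not clinical values."""
    n = len(glucose_hist)
    weights = [lam ** (n - 1 - i) for i in range(n)]        # fade older samples
    p_err = sum(w * (g - target) for w, g in zip(weights, glucose_hist)) / sum(weights)
    diffs = [b - a for a, b in zip(glucose_hist, glucose_hist[1:])]
    d_err = (sum(w * d for w, d in zip(weights[1:], diffs)) / sum(weights[1:])
             if diffs else 0.0)
    rate = basal + sensitivity_scale * (kp * p_err + kd * d_err)
    return max(rate, 0.0)                                    # no negative insulin rates
```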
NASA Technical Reports Server (NTRS)
Seltzer, S. M.
1976-01-01
The problem discussed is to design a digital controller for a typical satellite. The controlled plant is considered to be a rigid body acting in a plane. The controller is assumed to be a digital computer which, when combined with the proposed control algorithm, can be represented as a sampled-data system. The objective is to present a design strategy and technique for selecting numerical values for the control gains (assuming position, integral, and derivative feedback) and the sample rate. The technique is based on the parameter plane method and requires that the system be amenable to z-transform analysis.
Outdoor flocking of quadcopter drones with decentralized model predictive control.
Yuan, Quan; Zhan, Jingyuan; Li, Xiang
2017-11-01
In this paper, we present a multi-drone system featuring a decentralized model predictive control (DMPC) flocking algorithm. The drones gather localized information from neighbors and update their velocities using the DMPC flocking algorithm. In the multi-drone system, data packets are transmitted through XBee® wireless modules in broadcast mode, yielding an anonymous and decentralized system in which all calculations and control are completed on each drone's onboard minicomputer. Each drone is a double-layered agent system, with the coordination layer running the multi-drone flocking algorithms and the flight control layer navigating the drone; the final formation of the flock relies on both the communication range and the desired inter-drone distance. We give both numerical simulations and field tests with a flock of five drones, showing that the DMPC flocking algorithm performs well on the presented multi-drone system in both convergence rate and the ability to track a desired path. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms
Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun
2011-01-01
This paper investigates the tuning of Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as good static and dynamic performance specifications and smooth control action. A model of the nonlinear thermodynamic relationships among the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good step-response performance, with small overshoot, fast settling time, and reduced rise time and steady-state error. Moreover, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities, and conflicting performance criteria. The results indicate that multi-objective optimization is an effective and promising tuning approach for complex greenhouse production. PMID:22163927
NASA Technical Reports Server (NTRS)
Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.
1982-01-01
The derivation of the equations is presented, the rate control algorithm described, and simulation methodologies summarized. A set of dynamics equations that can be used recursively to calculate the forces and torques acting at the joints of an n-link manipulator, given the manipulator joint rates, is derived. The equations are valid for any n-link manipulator system with any kind of joints connected in any sequence. The equations of motion for the class of manipulators consisting of n rigid links interconnected by rotary joints are derived. A technique is outlined for reducing the system of equations to eliminate constraint torques. The linearized dynamics equations for an n-link manipulator system are derived. The general n-link linearized equations are then applied to a two-link configuration. The coordinated rate control algorithm used to compute individual joint rates when given end-effector rates is described. A short discussion of simulation methodologies is presented.
Inflight redesign of the IUE attitude control system
NASA Technical Reports Server (NTRS)
Femiano, M. D.
1986-01-01
The one- and two-gyro system designs of the International Ultraviolet Explorer (IUE) attitude control system (ACS) are examined. The inertial reference assembly that provides the primary attitude reference for IUE consists of six rate sensors which are single-axis rate integrating gyros. The gyros operate in a pulse rebalanced mode that produces an output pulse for 0.01 arcsec of motion about the input axis. The functions of the fine error sensor, fine sun sensor (FSS), the IUE reaction wheels, the onboard computer, and the hold/slew algorithm are described. The use of the hold/slew algorithm to compute the control voltage for the ACS based on the Kalman filter is studied. A two-gyro system was incorporated into IUE following gyro failure. The procedures for establishing attitude control with the two-gyro design based on the FSS is analyzed. The performance of the two-gyro system is evaluated; it is observed that the pitch and yaw gyro control is 0.24 arcsec and the control is sufficient to permit extended periods of observation.
Bounded Kalman filter method for motion-robust, non-contact heart rate estimation
Prakash, Sakthi Kumar Arul; Tucker, Conrad S.
2018-01-01
The authors of this work present a real-time measurement of heart rate across different lighting conditions and motion categories. This is an advancement over existing remote Photo Plethysmography (rPPG) methods that require a static, controlled environment for heart rate detection, making them impractical for real-world scenarios wherein a patient may be in motion, or remotely connected to a healthcare provider through telehealth technologies. The algorithm aims to minimize motion artifacts such as blurring and noise due to head movements (uniform, random) by employing i) a blur identification and denoising algorithm for each frame and ii) a bounded Kalman filter technique for motion estimation and feature tracking. A case study is presented that demonstrates the feasibility of the algorithm in non-contact estimation of the pulse rate of subjects performing everyday head and body movements. The method in this paper outperforms state of the art rPPG methods in heart rate detection, as revealed by the benchmarked results. PMID:29552419
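In the paper the bounded Kalman filter serves motion estimation and feature tracking; the toy below only illustrates the bounded-innovation idea itself, applied to a scalar heart-rate track, with made-up noise variances and bound.

```python
def bounded_kalman_hr(measurements, q=0.5, r=4.0, bound=8.0, hr0=75.0, p0=10.0):
    """Scalar Kalman filter for a slowly varying heart rate (bpm).
    The innovation is clipped to +/- bound so a single motion-corrupted
    measurement cannot drag the estimate far off track."""
    x, p = hr0, p0
    estimates = []
    for z in measurements:
        p = p + q                               # predict (random-walk model)
        k = p / (p + r)                         # Kalman gain
        innov = max(-bound, min(bound, z - x))  # bounded innovation
        x = x + k * innov                       # update
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```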
A machine learning-based framework to identify type 2 diabetes through electronic health records
Zheng, Tao; Xie, Wei; Xu, Liling; He, Xiaoying; Zhang, Ya; You, Mingrong; Yang, Gong; Chen, You
2016-01-01
Objective: To discover diverse genotype-phenotype associations affiliated with Type 2 Diabetes Mellitus (T2DM) via genome-wide association study (GWAS) and phenome-wide association study (PheWAS), more cases (T2DM subjects) and controls (subjects without T2DM) need to be identified (e.g., via Electronic Health Records (EHR)). However, existing expert-based identification algorithms often suffer from a low recall rate and could miss a large number of valuable samples under conservative filtering standards. The goal of this work is to develop a semi-automated framework based on machine learning, as a pilot study, to liberalize the filtering criteria and improve the recall rate while keeping a low false positive rate. Materials and methods: We propose a data-informed framework for identifying subjects with and without T2DM from EHR via feature engineering and machine learning. We evaluate and contrast the identification performance of widely used machine learning models within our framework, including k-Nearest-Neighbors, Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine, and Logistic Regression. Our framework was applied to 300 patient samples (161 cases, 60 controls, and 79 unconfirmed subjects), randomly selected from a 23,281-subject diabetes-related cohort retrieved from a regional distributed EHR repository spanning 2012 to 2014. Results: We apply the top-performing machine learning algorithms to the engineered features. We benchmark and contrast the accuracy, precision, AUC, sensitivity, and specificity of the classification models against the state-of-the-art expert algorithm for identification of T2DM subjects. Our results indicate that the framework achieved high identification performance (∼0.98 average AUC), much higher than the state-of-the-art algorithm (0.71 AUC). Discussion: Expert-algorithm-based identification of T2DM subjects from EHR is often hampered by high missing rates due to conservative selection criteria. Our framework leverages machine learning and feature engineering to loosen the selection criteria and achieve a high identification rate of cases and controls. Conclusions: Our proposed framework demonstrates a more accurate and efficient approach for identifying subjects with and without T2DM from EHR. PMID:27919371
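A minimal sketch of the kind of model comparison the framework performs, using scikit-learn and cross-validated AUC. The random feature matrix and labels stand in for the engineered EHR features and chart-reviewed outcomes, and the hyperparameters are library defaults rather than the study's settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# X: engineered features per subject, y: 1 = T2DM case, 0 = control.
# Random data stands in for the EHR-derived features used in the study.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = rng.integers(0, 2, size=300)

models = {
    "kNN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=200),
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:20s} mean AUC = {auc:.3f}")
```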
A machine learning-based framework to identify type 2 diabetes through electronic health records.
Zheng, Tao; Xie, Wei; Xu, Liling; He, Xiaoying; Zhang, Ya; You, Mingrong; Yang, Gong; Chen, You
2017-01-01
To discover diverse genotype-phenotype associations affiliated with Type 2 Diabetes Mellitus (T2DM) via genome-wide association study (GWAS) and phenome-wide association study (PheWAS), more cases (T2DM subjects) and controls (subjects without T2DM) need to be identified (e.g., via Electronic Health Records (EHR)). However, existing expert-based identification algorithms often suffer from a low recall rate and could miss a large number of valuable samples under conservative filtering standards. The goal of this work is to develop a semi-automated framework based on machine learning, as a pilot study, to liberalize the filtering criteria and improve the recall rate while keeping a low false positive rate. We propose a data-informed framework for identifying subjects with and without T2DM from EHR via feature engineering and machine learning. We evaluate and contrast the identification performance of widely used machine learning models within our framework, including k-Nearest-Neighbors, Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine, and Logistic Regression. Our framework was applied to 300 patient samples (161 cases, 60 controls, and 79 unconfirmed subjects), randomly selected from a 23,281-subject diabetes-related cohort retrieved from a regional distributed EHR repository spanning 2012 to 2014. We apply the top-performing machine learning algorithms to the engineered features. We benchmark and contrast the accuracy, precision, AUC, sensitivity, and specificity of the classification models against the state-of-the-art expert algorithm for identification of T2DM subjects. Our results indicate that the framework achieved high identification performance (∼0.98 average AUC), much higher than the state-of-the-art algorithm (0.71 AUC). Expert-algorithm-based identification of T2DM subjects from EHR is often hampered by high missing rates due to conservative selection criteria. Our framework leverages machine learning and feature engineering to loosen the selection criteria and achieve a high identification rate of cases and controls. Our proposed framework demonstrates a more accurate and efficient approach for identifying subjects with and without T2DM from EHR. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S; Zhang, H; Zhang, B
2015-06-15
Purpose: To clinically evaluate the differences in volumetric modulated arc therapy (VMAT) treatment plan and delivery between two commercial treatment planning systems. Methods: Two commercial VMAT treatment planning systems with different VMAT optimization algorithms and delivery approaches were evaluated. This study included 16 clinical VMAT plans performed with the first system: 2 spine, 4 head and neck (HN), 2 brain, 4 pancreas, and 4 pelvis plans. These 16 plans were then re-optimized with the same number of arcs using the second treatment planning system. Planning goals were invariant between the two systems. Gantry speed, dose rate modulation, MLC modulation, plan quality, number of monitor units (MUs), VMAT quality assurance (QA) results, and treatment delivery time were compared between the 2 systems. VMAT QA results were performed using Mapcheck2 and analyzed with gamma analysis (3mm/3% and 2mm/2%). Results: Similar plan quality was achieved with each VMAT optimization algorithm, and the difference in delivery time was minimal. Algorithm 1 achieved planning goals by highly modulating the MLC (total distance traveled by leaves (TL) = 193 cm average over control points per plan), while maintaining a relatively constant dose rate (dose-rate change <100 MU/min). Algorithm 2 involved less MLC modulation (TL = 143 cm per plan), but greater dose-rate modulation (range = 0-600 MU/min). The average number of MUs was 20% less for algorithm 2 (ratio of MUs for algorithms 2 and 1 ranged from 0.5-1). VMAT QA results were similar for all disease sites except HN plans. For HN plans, the average gamma passing rates were 88.5% (2mm/2%) and 96.9% (3mm/3%) for algorithm 1 and 97.9% (2mm/2%) and 99.6% (3mm/3%) for algorithm 2. Conclusion: Both VMAT optimization algorithms achieved comparable plan quality; however, fewer MUs were needed and QA results were more robust for Algorithm 2, which more highly modulated dose rate.
$$\\mathscr{H}_2$$ optimal control techniques for resistive wall mode feedback in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clement, Mitchell; Hanson, Jeremy; Bialek, Jim
DIII-D experiments show that a new, advanced algorithm improves resistive wall mode (RWM) stability control in high performance discharges using external coils. DIII-D can excite strong, locked or nearly locked external kink modes whose rotation frequencies and growth rates are on the order of the magnetic flux diffusion time of the vacuum vessel wall. The VALEN RWM model has been used to gauge the effectiveness of RWM control algorithms in tokamaks. Simulations and experiments have shown that modern control techniques like Linear Quadratic Gaussian (LQG) control will perform better, using 77% less current, than classical techniques when using control coils external to DIII-D's vacuum vessel. Experiments were conducted to develop control of a rotating n = 1 perturbation using an LQG controller derived from VALEN and external coils. Feedback using this LQG algorithm outperformed a proportional gain only controller in these perturbation experiments over a range of frequencies. Results from high N experiments also show that advanced feedback techniques using external control coils may be as effective as internal control coil feedback using classical control techniques.
$$\\mathscr{H}_2$$ optimal control techniques for resistive wall mode feedback in tokamaks
Clement, Mitchell; Hanson, Jeremy; Bialek, Jim; ...
2018-02-28
DIII-D experiments show that a new, advanced algorithm improves resistive wall mode (RWM) stability control in high performance discharges using external coils. DIII-D can excite strong, locked or nearly locked external kink modes whose rotation frequencies and growth rates are on the order of the magnetic flux diffusion time of the vacuum vessel wall. The VALEN RWM model has been used to gauge the effectiveness of RWM control algorithms in tokamaks. Simulations and experiments have shown that modern control techniques like Linear Quadratic Gaussian (LQG) control will perform better, using 77% less current, than classical techniques when using control coils external to DIII-D's vacuum vessel. Experiments were conducted to develop control of a rotating n = 1 perturbation using an LQG controller derived from VALEN and external coils. Feedback using this LQG algorithm outperformed a proportional gain only controller in these perturbation experiments over a range of frequencies. Results from high N experiments also show that advanced feedback techniques using external control coils may be as effective as internal control coil feedback using classical control techniques.
Friesen, Melissa C.
2013-01-01
Objectives: Algorithm-based exposure assessments based on patterns in questionnaire responses and professional judgment can readily apply transparent exposure decision rules to thousands of jobs quickly. However, we need to better understand how algorithms compare to a one-by-one job review by an exposure assessor. We compared algorithm-based estimates of diesel exhaust exposure to those of three independent raters within the New England Bladder Cancer Study, a population-based case–control study, and identified conditions under which disparities occurred in the assessments of the algorithm and the raters. Methods: Occupational diesel exhaust exposure was assessed previously using an algorithm and a single rater for all 14 983 jobs reported by 2631 study participants during personal interviews conducted from 2001 to 2004. Two additional raters independently assessed a random subset of 324 jobs that were selected based on strata defined by the cross-tabulations of the algorithm and the first rater’s probability assessments for each job, oversampling their disagreements. The algorithm and each rater assessed the probability, intensity and frequency of occupational diesel exhaust exposure, as well as a confidence rating for each metric. Agreement among the raters, their aggregate rating (average of the three raters’ ratings) and the algorithm were evaluated using proportion of agreement, kappa and weighted kappa (κw). Agreement analyses on the subset used inverse probability weighting to extrapolate the subset to estimate agreement for all jobs. Classification and Regression Tree (CART) models were used to identify patterns in questionnaire responses that predicted disparities in exposure status (i.e., unexposed versus exposed) between the first rater and the algorithm-based estimates. Results: For the probability, intensity and frequency exposure metrics, moderate to moderately high agreement was observed among raters (κw = 0.50–0.76) and between the algorithm and the individual raters (κw = 0.58–0.81). For these metrics, the algorithm estimates had consistently higher agreement with the aggregate rating (κw = 0.82) than with the individual raters. For all metrics, the agreement between the algorithm and the aggregate ratings was highest for the unexposed category (90–93%) and was poor to moderate for the exposed categories (9–64%). Lower agreement was observed for jobs with a start year <1965 versus ≥1965. For the confidence metrics, the agreement was poor to moderate among raters (κw = 0.17–0.45) and between the algorithm and the individual raters (κw = 0.24–0.61). CART models identified patterns in the questionnaire responses that predicted a fair-to-moderate (33–89%) proportion of the disagreements between the raters’ and the algorithm estimates. Discussion: The agreement between any two raters was similar to the agreement between an algorithm-based approach and individual raters, providing additional support for using the more efficient and transparent algorithm-based approach. CART models identified some patterns in disagreements between the first rater and the algorithm. Given the absence of a gold standard for estimating exposure, these patterns can be reviewed by a team of exposure assessors to determine whether the algorithm should be revised for future studies. PMID:23184256
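The agreement statistics reported above can in principle be computed with standard tools; the snippet below computes a linearly weighted kappa between an individual rater and the algorithm for ordinal exposure ratings (the rating vectors here are invented for illustration).

```python
from sklearn.metrics import cohen_kappa_score

# Ordinal probability-of-exposure ratings (0 = unexposed ... 3 = probable)
# assigned to the same jobs by a human rater and by the algorithm.
rater = [0, 0, 1, 2, 3, 1, 0, 2, 2, 0]
algo  = [0, 1, 1, 2, 3, 0, 0, 2, 3, 0]

# Linear weights penalize disagreements by how far apart the ordinal
# categories are, which suits graded exposure ratings.
kw = cohen_kappa_score(rater, algo, weights="linear")
print(f"weighted kappa = {kw:.2f}")
```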
Differential Evolution algorithm applied to FSW model calibration
NASA Astrophysics Data System (ADS)
Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.
2014-03-01
Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
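A compact version of the classic DE/rand/1/bin scheme is sketched below. The toy objective stands in for the mismatch between simulated and measured weld quantities, and the population size, scaling factor F, and crossover rate CR are exactly the kinds of settings whose influence the paper studies.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin minimizer.  bounds is a list of (low, high)
    pairs, one per calibration parameter (for an FSW model these would be
    the adjustable heat-input and heat-transfer coefficients)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([objective(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.random(dim) < CR                    # binomial crossover
            cross[rng.integers(dim)] = True                 # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = objective(trial)
            if f < cost[i]:                                 # greedy selection
                pop[i], cost[i] = trial, f
    best = int(np.argmin(cost))
    return pop[best], cost[best]

# Toy quadratic standing in for the calibration error; in the real problem
# this would wrap a CFD run of the FSW model.
best_x, best_f = differential_evolution(lambda p: float(np.sum((p - 1.5) ** 2)),
                                        bounds=[(0, 5), (0, 5)])
print(best_x, best_f)
```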
Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei
2016-01-01
In Rechargeable Wireless Sensor Networks (R-WSNs), in order to achieve the maximum data collection rate it is critical that sensors operate at very low duty cycles because of the sporadic availability of energy. A sensor has to stay in a dormant state for most of the time in order to recharge the battery and use the energy prudently. In addition, a sensor cannot always conserve energy if a network is able to harvest excessive energy from the environment, due to its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on distinct application scenarios. Since a higher data collection rate, or the maximum data collection rate, is the ultimate objective of sensor deployment, the surplus energy of a node can be utilized to strengthen packet delivery efficiency and improve the data generating rate in R-WSNs. In this work, we propose an algorithm based on data aggregation that computes an upper data generation rate by maximizing it as an optimization problem for the network, which is formulated as a linear programming problem. Subsequently, a dual problem is constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology controlling scheme is adopted to improve the network's performance. Through extensive simulation and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks. PMID:27483282
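The dual-decomposition step can be illustrated with a toy allocation problem: relax a shared energy budget with a Lagrange multiplier and drive the multiplier by a projected subgradient. A log utility is used below so each node's best response has a closed form; the paper's actual formulation is a linear program with its own constraints, so treat this strictly as a sketch of the mechanism.

```python
import numpy as np

def dual_subgradient_rates(e, E_total, steps=500, step0=0.5):
    """Toy dual decomposition for rate allocation under a shared energy budget.
    Each node i picks a data rate r_i; the coupling constraint
    sum(e_i * r_i) <= E_total is relaxed with a multiplier lam, which is
    updated by a projected subgradient step with a diminishing step size."""
    lam = 1.0
    r = np.zeros(len(e))
    for k in range(1, steps + 1):
        # each node maximizes log(1 + r_i) - lam * e_i * r_i on its own
        r = np.maximum(0.0, 1.0 / (lam * e) - 1.0)
        violation = e @ r - E_total          # subgradient of the dual
        lam = max(1e-6, lam + (step0 / np.sqrt(k)) * violation)
    return r, lam

e = np.array([1.0, 2.0, 4.0])                # energy cost per unit rate at each node
rates, lam = dual_subgradient_rates(e, E_total=5.0)
print(rates, e @ rates)                      # allocated rates and total energy use
```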
Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei
2016-07-30
In Rechargeable Wireless Sensor Networks (R-WSNs), in order to achieve the maximum data collection rate it is critical that sensors operate at very low duty cycles because of the sporadic availability of energy. A sensor has to stay in a dormant state for most of the time in order to recharge the battery and use the energy prudently. In addition, a sensor cannot always conserve energy if a network is able to harvest excessive energy from the environment, due to its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on distinct application scenarios. Since a higher data collection rate, or the maximum data collection rate, is the ultimate objective of sensor deployment, the surplus energy of a node can be utilized to strengthen packet delivery efficiency and improve the data generating rate in R-WSNs. In this work, we propose an algorithm based on data aggregation that computes an upper data generation rate by maximizing it as an optimization problem for the network, which is formulated as a linear programming problem. Subsequently, a dual problem is constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology controlling scheme is adopted to improve the network's performance. Through extensive simulation and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks.
Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael
2017-01-01
Background: Treatment algorithms are considered as key to improve outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Methods: Inpatients, aged 18 to 70 years with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES), the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Results: Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031) and ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU (P<.001). Remission rates at discharge differed across groups; ALGO DE had the highest (89.2%) and TAU the lowest rates (66.2%). Conclusions: A highly structured algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission with depressed inpatients than treatment as usual or computerized medication choice guidance. PMID:28645191
Adli, Mazda; Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael
2017-09-01
Treatment algorithms are considered as key to improve outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Inpatients, aged 18 to 70 years with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES), the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031) and ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU (P<.001). Remission rates at discharge differed across groups; ALGO DE had the highest (89.2%) and TAU the lowest rates (66.2%). A highly structured algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission with depressed inpatients than treatment as usual or computerized medication choice guidance. © The Author 2017. Published by Oxford University Press on behalf of CINP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gayeski, N.; Armstrong, Peter; Alvira, M.
2011-11-30
KGS Buildings LLC (KGS) and Pacific Northwest National Laboratory (PNNL) have developed a simplified control algorithm and prototype low-lift chiller controller suitable for model-predictive control in a demonstration project of low-lift cooling. Low-lift cooling is a highly efficient cooling strategy conceived to enable low or net-zero energy buildings. A low-lift cooling system consists of a high efficiency low-lift chiller, radiant cooling, thermal storage, and model-predictive control to pre-cool thermal storage overnight on an optimal cooling rate trajectory. We call the properly integrated and controlled combination of these elements a low-lift cooling system (LLCS). This document is the final report for that project.
A comparison between computer-controlled and set work rate exercise based on target heart rate
NASA Technical Reports Server (NTRS)
Pratt, Wanda M.; Siconolfi, Steven F.; Webster, Laurie; Hayes, Judith C.; Mazzocca, Augustus D.; Harris, Bernard A., Jr.
1991-01-01
Two methods are compared for observing the heart rate (HR), metabolic equivalents, and time in target HR zone (defined as the target HR ± 5 bpm) during 20 min of exercise at a prescribed intensity of the maximum working capacity. In one method, called set-work rate exercise, the information from a graded exercise test is used to select a target HR and to calculate a corresponding constant work rate that should induce the desired HR. In the other method, the work rate is controlled by a computer algorithm to achieve and maintain a prescribed target HR. It is shown that computer-controlled exercise is an effective alternative to the traditional set work rate exercise, particularly when tight control of cardiovascular responses is necessary.
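One simple way to realize the computer-controlled mode is a feedback loop that nudges the ergometer work rate in proportion to the heart-rate error; the gain, dead band, and work-rate limits below are illustrative and are not the algorithm used in the study.

```python
def adjust_work_rate(work_rate, heart_rate, target_hr, k_p=1.5,
                     band=5.0, w_min=25.0, w_max=250.0):
    """One control step: returns the new ergometer work rate in watts.
    Inside the +/- band around the target HR the work rate is held;
    otherwise it is moved in proportion to the error."""
    error = target_hr - heart_rate
    if abs(error) <= band:
        return work_rate                      # already in the target HR zone
    new_rate = work_rate + k_p * error        # HR too low -> add load; too high -> shed load
    return min(max(new_rate, w_min), w_max)   # keep the ergometer in a safe range
```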
Design of Robust Adaptive Unbalance Response Controllers for Rotors with Magnetic Bearings
NASA Technical Reports Server (NTRS)
Knospe, Carl R.; Tamer, Samir M.; Fedigan, Stephen J.
1996-01-01
Experimental results have recently demonstrated that an adaptive open loop control strategy can be highly effective in the suppression of unbalance induced vibration on rotors supported in active magnetic bearings. This algorithm, however, relies upon a predetermined gain matrix. Typically, this matrix is determined by an optimal control formulation resulting in the choice of the pseudo-inverse of the nominal influence coefficient matrix as the gain matrix. This solution may result in problems with stability and performance robustness since the estimated influence coefficient matrix is not equal to the actual influence coefficient matrix. Recently, analysis tools have been developed to examine the robustness of this control algorithm with respect to structured uncertainty. Herein, these tools are extended to produce a design procedure for determining the adaptive law's gain matrix. The resulting control algorithm has a guaranteed convergence rate and steady state performance in spite of the uncertainty in the rotor system. Several examples are presented which demonstrate the effectiveness of this approach and its advantages over the standard optimal control formulation.
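A sketch of the adaptive open-loop scheme the paper starts from: the synchronous correction vector is updated each cycle through a fixed gain matrix acting on the measured unbalance response, with the pseudo-inverse of the nominal influence-coefficient matrix as the conventional gain choice. The matrices, perturbation, and step size below are invented for illustration; the paper's contribution, a robust procedure for designing that gain matrix, is not reproduced here.

```python
import numpy as np

def adaptive_unbalance_update(u, e, T, alpha=0.5):
    """One iteration of synchronous open-loop adaptation.
    u : complex Fourier coefficients of the correction signal (per bearing)
    e : complex vector of measured synchronous vibration
    T : gain matrix; the classical choice is pinv(C_nominal), the
        pseudo-inverse of the nominal influence-coefficient matrix."""
    return u - alpha * (T @ e)

# Toy model: the true influence coefficients differ from the nominal ones,
# which is exactly the uncertainty the robust design has to tolerate.
C_true = np.array([[1.0 + 0.2j, 0.1j], [0.05, 0.8 - 0.1j]])
C_nom = np.array([[1.0, 0.0], [0.0, 0.8]])
T = np.linalg.pinv(C_nom)
d = np.array([0.6 + 0.3j, -0.4 + 0.1j])      # unbalance-induced response
u = np.zeros(2, dtype=complex)
for _ in range(20):
    e = d + C_true @ u                        # measured synchronous vibration
    u = adaptive_unbalance_update(u, e, T)
print(np.abs(d + C_true @ u))                 # residual vibration amplitude
```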
A Distributed Transmission Rate Adjustment Algorithm in Heterogeneous CSMA/CA Networks
Xie, Shuanglong; Low, Kay Soon; Gunawan, Erry
2015-01-01
Distributed transmission rate tuning is important for a wide variety of IEEE 802.15.4 network applications such as industrial network control systems. Such systems often require each node to sustain certain throughput demand in order to guarantee the system performance. It is thus essential to determine a proper transmission rate that can meet the application requirement and compensate for network imperfections (e.g., packet loss). Such a tuning in a heterogeneous network is difficult due to the lack of modeling techniques that can deal with the heterogeneity of the network as well as the network traffic changes. In this paper, a distributed transmission rate tuning algorithm in a heterogeneous IEEE 802.15.4 CSMA/CA network is proposed. Each node uses the results of clear channel assessment (CCA) to estimate the busy channel probability. Then a mathematical framework is developed to estimate the on-going heterogeneous traffics using the busy channel probability at runtime. Finally a distributed algorithm is derived to tune the transmission rate of each node to accurately meet the throughput requirement. The algorithm does not require modifications on IEEE 802.15.4 MAC layer and it has been experimentally implemented and extensively tested using TelosB nodes with the TinyOS protocol stack. The results reveal that the algorithm is accurate and can satisfy the throughput demand. Compared with existing techniques, the algorithm is fully distributed and thus does not require any central coordination. With this property, it is able to adapt to traffic changes and re-adjust the transmission rate to the desired level, which cannot be achieved using the traditional modeling techniques. PMID:25822140
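The feedback idea can be sketched as follows: estimate the busy-channel probability from clear-channel-assessment outcomes and inflate the sending rate so that the delivered throughput still meets the demand. The delivery-ratio model below is a deliberate oversimplification of the paper's analytical framework, and the parameter names are assumptions.

```python
def tune_tx_rate(cca_busy_count, cca_total, demand_pps, max_retries=3):
    """One adjustment step for a node's packet transmission rate.
    Estimates the busy-channel probability from CCA statistics, converts it
    to a crude expected delivery ratio (channel treated as independent per
    attempt), and scales the sending rate so the delivered rate meets the
    throughput demand in packets per second."""
    p_busy = cca_busy_count / max(cca_total, 1)
    p_access = 1.0 - p_busy ** (1 + max_retries)   # at least one attempt finds the channel idle
    delivery_ratio = max(p_access, 1e-3)           # avoid division by zero
    return demand_pps / delivery_ratio             # packets per second to actually send
```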
Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors.
Belkacem, Abdelkader Nasreddine; Saetia, Supat; Zintus-art, Kalanyu; Shin, Duk; Kambara, Hiroyuki; Yoshimura, Natsue; Berrached, Nasreddine; Koike, Yasuharu
2015-01-01
EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practicing, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements, as an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instance of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to controlling the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing, with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.
Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors
Saetia, Supat; Zintus-art, Kalanyu; Shin, Duk; Kambara, Hiroyuki; Yoshimura, Natsue; Berrached, Nasreddine; Koike, Yasuharu
2015-01-01
EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practicing, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements, as an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instance of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to controlling the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing, with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control. PMID:26690500
Tactical Conflict Detection in Terminal Airspace
NASA Technical Reports Server (NTRS)
Tang, Huabin; Robinson, John E.; Denery, Dallas G.
2010-01-01
Air traffic systems have long relied on automated short-term conflict prediction algorithms to warn controllers of impending conflicts (losses of separation). The complexity of terminal airspace has proven difficult for such systems as it often leads to excessive false alerts. Thus, the legacy system, called Conflict Alert, which provides short-term alerts in both en-route and terminal airspace currently, is often inhibited or degraded in areas where frequent false alerts occur, even though the alerts are provided only when an aircraft is in dangerous proximity of other aircraft. This research investigates how a minimal level of flight intent information may be used to improve short-term conflict detection in terminal airspace such that it can be used by the controller to maintain legal aircraft separation. The flight intent information includes a site-specific nominal arrival route and inferred altitude clearances in addition to the flight plan that includes the RNAV (Area Navigation) departure route. A new tactical conflict detection algorithm is proposed, which uses a single analytic trajectory, determined by the flight intent and the current state information of the aircraft, and includes a complex set of current, dynamic separation standards for terminal airspace to define losses of separation. The new algorithm is compared with an algorithm that imitates a known en-route algorithm and another that imitates Conflict Alert by analysis of false-alert rate and alert lead time with recent real-world data of arrival and departure operations and a large set of operational error cases from Dallas/Fort Worth TRACON (Terminal Radar Approach Control). The new algorithm yielded a false-alert rate of two per hour and an average alert lead time of 38 seconds.
Alcator C-Mod Digital Plasma Control System
NASA Astrophysics Data System (ADS)
Wolfe, S. M.
2005-10-01
A new digital plasma control system (DPCS) has been implemented for Alcator C-Mod. The new system was put into service at the start of the 2005 run campaign and has been in routine operation since. The system consists of two 64-input, 16-output cPCI digitizers attached to a rack-mounted single-CPU Linux server, which performs both the I/O and the computation. During initial operation, the system was set up to directly emulate the original C-Mod ``Hybrid'' MIMO linear control system. Compatibility with the previous control system allows the existing user interface software and data structures to be used with the new hardware. The control program is written in IDL and runs under standard Linux. Interrupts are disabled during the plasma pulses to achieve real-time operation. A synchronous loop is executed with a nominal cycle rate of 10 kHz. Emulation of the original linear control algorithms requires 50 μsec per iteration, with the time evenly split between I/O and computation, so rates of about 20 kHz are achievable. Reliable vertical position control has been demonstrated with cycle rates as low as 5 kHz. Additional computations, including non-linear algorithms and adaptive response, are implemented as optional procedure calls within the main real-time loop.
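The sketch below illustrates, in outline only, the kind of fixed-cycle MIMO loop the abstract describes: read the digitizer inputs, apply a linear gain matrix, write the outputs, and hold a cycle deadline. It is a generic stand-in written in Python (which, unlike the IDL/Linux system described, cannot actually hold a 100-microsecond deadline); the gain matrix and I/O functions are placeholders.

```python
import time
import numpy as np

N_IN, N_OUT = 64, 16
CYCLE_S = 1.0 / 10000.0          # nominal 10 kHz cycle, as in the abstract

# Placeholder MIMO gain matrix emulating a linear control law u = G @ x;
# the real system loads gains compatible with the original "Hybrid" controller.
G = np.zeros((N_OUT, N_IN))

def read_inputs():
    """Stand-in for reading the two 64-input cPCI digitizers."""
    return np.zeros(N_IN)

def write_outputs(u):
    """Stand-in for writing the 16 analog outputs."""
    pass

def control_loop(n_cycles=1000):
    """Synchronous loop: I/O plus a matrix multiply each cycle, with a simple
    overrun counter; busy-waits to each cycle boundary."""
    overruns = 0
    next_deadline = time.perf_counter() + CYCLE_S
    for _ in range(n_cycles):
        x = read_inputs()
        u = G @ x                        # linear control computation
        write_outputs(u)
        if time.perf_counter() > next_deadline:
            overruns += 1
        while time.perf_counter() < next_deadline:
            pass                         # spin until the next cycle boundary
        next_deadline += CYCLE_S
    return overruns

if __name__ == "__main__":
    print("cycle overruns:", control_loop())
```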
An Algorithm for Creating Virtual Controls Using Integrated and Harmonized Longitudinal Data.
Hansen, William B; Chen, Shyh-Huei; Saldana, Santiago; Ip, Edward H
2018-06-01
We introduce a strategy for creating virtual control groups: cases generated through computer algorithms that, when aggregated, may serve as experimental comparators where live controls are difficult to recruit, such as when programs are widely disseminated and randomization is not feasible. We integrated and harmonized data from eight archived longitudinal adolescent-focused data sets spanning the decades from 1980 to 2010. Collectively, these studies examined numerous psychosocial variables and assessed past 30-day alcohol, cigarette, and marijuana use. Additional treatment and control group data from two archived randomized control trials were used to test the virtual control algorithm. Both randomized controlled trials (RCTs) assessed intentions, normative beliefs, and values as well as past 30-day alcohol, cigarette, and marijuana use. We developed an algorithm that used percentile scores from the integrated data set to create age- and gender-specific latent psychosocial scores. The algorithm matched each treatment case's observed psychosocial score at pretest to create a virtual control case that figuratively "matured" based on age-related changes, holding the virtual case's percentile constant. Virtual controls matched treatment case occurrence, eliminating differential attrition as a threat to validity. Virtual case substance use was estimated from the virtual case's latent psychosocial score using logistic regression coefficients derived from analyzing the treatment group. Averaging across virtual cases created group estimates of prevalence. Two criteria were established to evaluate the adequacy of virtual control cases: (1) virtual control group pretest drug prevalence rates should match those of the treatment group and (2) virtual control group patterns of drug prevalence over time should match live controls. The algorithm successfully matched pretest prevalence for both RCTs. Increases in prevalence were observed, although there were discrepancies between live and virtual control outcomes. This study provides an initial framework for creating virtual controls using a step-by-step procedure that can now be revised and validated using other prevention trial data.
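A compact way to picture the percentile-matching step is sketched below: hold a treated case's age-specific percentile constant, "mature" the latent score through reference distributions for later ages, and convert the score to a use probability with logistic coefficients. The reference distributions and logistic coefficients here are synthetic placeholders; the published algorithm also stratifies by gender and fits its coefficients to the treatment group.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the integrated, harmonized reference data: latent
# psychosocial scores by age (the real algorithm also stratifies by gender).
reference = {age: rng.normal(loc=50 - 2 * (age - 12), scale=10, size=5000)
             for age in range(12, 18)}

def virtual_trajectory(pretest_score, pretest_age, follow_up_ages):
    """Hold the case's percentile constant and 'mature' the score using the
    age-specific reference distributions."""
    pct = np.mean(reference[pretest_age] <= pretest_score) * 100.0
    return {age: np.percentile(reference[age], pct) for age in follow_up_ages}

def use_probability(score, b0=0.5, b1=-0.05):
    """Hypothetical logistic coefficients relating the latent score to past
    30-day use; the paper estimates these from the treatment group."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * score)))

if __name__ == "__main__":
    traj = virtual_trajectory(pretest_score=55.0, pretest_age=13,
                              follow_up_ages=[14, 15, 16])
    for age, score in traj.items():
        print(age, round(score, 1), round(use_probability(score), 3))
```

Averaging such per-case probabilities across all virtual cases would give the group prevalence estimates the abstract refers to.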
Optimization-based Approach to Cross-layer Resource Management in Wireless Networked Control Systems
2013-05-01
interest from both academia and industry [37], finding applications in unmanned robotic vehicles, automated highways and factories, smart homes and ... is stable when the scaler varies slowly. The algorithm is further extended to utilize the slack resource in the network, which leads to the ... model; optimal sampling rate allocation formulation; price-based algorithm.
Application of a Smart Parachute Release Algorithm to the CPAS Test Architecture
NASA Technical Reports Server (NTRS)
Bledsoe, Kristin
2013-01-01
One of the primary test vehicles for the Capsule Parachute Assembly System (CPAS) is the Parachute Test Vehicle (PTV), a capsule-shaped structure similar to the Orion design but truncated to fit in the cargo area of a C-17 aircraft. The PTV has a full Orion-like parachute compartment and similar aerodynamics; however, because of the single-point attachment of the CPAS parachutes and the lack of an Orion-like Reaction Control System (RCS), the PTV has the potential to reach significant body rates. High body rates at the time of the Drogue release may cause the PTV to flip while the parachutes deploy, which may result in the severing of the Pilot or Main risers. In order to prevent high rates at the time of Drogue release, a "smart release" algorithm was implemented in the PTV avionics system. This algorithm, which was developed for the Orion flight system, triggers the Drogue parachute release when the body rates are near a minimum. This paper discusses the development and testing of the smart release algorithm; its implementation in the PTV avionics and the pretest simulation; and the results of its use on two CPAS tests.
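The abstract only states that the smart release fires when body rates are near a minimum; the sketch below shows one simple way such logic can be expressed (a windowed minimum test on the body-rate magnitude with a hard timeout). The threshold, window length, timeout, and test signal are illustrative assumptions, not the Orion/CPAS flight values.

```python
import numpy as np
from collections import deque

class SmartRelease:
    """Trigger when the total body-rate magnitude has just passed a local
    minimum (and is low), or unconditionally at a timeout."""

    def __init__(self, rate_threshold_dps=10.0, window=25, timeout_s=8.0, dt=0.02):
        self.rate_threshold = rate_threshold_dps
        self.history = deque(maxlen=window)
        self.timeout_steps = int(timeout_s / dt)
        self.steps = 0

    def update(self, p, q, r):
        """p, q, r: body rates in deg/s. Returns True when release should fire."""
        mag = float(np.sqrt(p * p + q * q + r * r))
        self.history.append(mag)
        self.steps += 1
        if self.steps >= self.timeout_steps:
            return True                        # never hold the drogues forever
        if len(self.history) < self.history.maxlen:
            return False
        recent = np.array(self.history)
        past_min = recent.argmin() < len(recent) - 3   # minimum already passed
        near_min = recent[-1] - recent.min() < 1.0     # and we are still close to it
        return mag < self.rate_threshold and past_min and near_min

if __name__ == "__main__":
    logic = SmartRelease()
    t = np.arange(0.0, 10.0, 0.02)
    p = 25.0 * np.cos(2 * np.pi * 0.2 * t)     # synthetic oscillating roll rate
    for k, pk in enumerate(p):
        if logic.update(pk, 2.0, 1.0):
            print("release at t = %.2f s, |rate| = %.1f deg/s"
                  % (t[k], np.hypot(pk, np.hypot(2.0, 1.0))))
            break
```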
Tuning of active vibration controllers for ACTEX by genetic algorithm
NASA Astrophysics Data System (ADS)
Kwak, Moon K.; Denoyer, Keith K.
1999-06-01
This paper is concerned with the optimal tuning of digitally programmable analog controllers on the ACTEX-1 smart structures flight experiment. The programmable controllers for each channel include a third-order Strain Rate Feedback (SRF) controller, a fifth-order SRF controller, a second-order Positive Position Feedback (PPF) controller, and a fourth-order PPF controller. Optimal manual tuning of several control parameters can be a difficult task even though the closed-loop control characteristics of each controller are well known. Hence, the automatic tuning of individual control parameters using genetic algorithms is proposed in this paper. The optimal control parameters of each control law are obtained by imposing a constraint on the closed-loop frequency response functions using the ACTEX mathematical model. The tuned control parameters are then uploaded to the ACTEX control electronics, and experiments on active vibration control are carried out in space. The experimental results on ACTEX will be presented.
Method for controlling gas metal arc welding
Smartt, Herschel B.; Einerson, Carolyn J.; Watkins, Arthur D.
1989-01-01
The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections.
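The claim describes a compute-measure-compare-correct loop. The sketch below mirrors that loop shape with a placeholder linear process model and proportional corrections, since the actual "algorithmic function means" and gains are not given in the abstract; all coefficients are made up.

```python
# Minimal sketch of the described control loop. The model coefficients and
# correction gains below are placeholders, not values from the patent.

def setpoints_from_heat_and_mass(heat_input, mass_input):
    """Placeholder 'algorithmic function means': map desired heat and mass
    (deposition) inputs to weld speed, wire feed rate and expected current."""
    wire_feed = 2.0 * mass_input                        # feed rate per unit mass input
    current_expected = 40.0 + 15.0 * wire_feed          # assumed arc characteristic
    weld_speed = current_expected * 20.0 / heat_input   # from heat ~ k * I / speed
    return weld_speed, wire_feed, current_expected

def measure_current(wire_feed):
    """Stand-in for the welding-current sensor (here: a slightly 'hot' arc)."""
    return 45.0 + 15.5 * wire_feed

def control_step(heat_input, mass_input, k_speed=0.02, k_feed=0.01):
    speed, feed, i_expected = setpoints_from_heat_and_mass(heat_input, mass_input)
    i_measured = measure_current(feed)
    error = i_measured - i_expected
    # Corrections for weld speed and filler wire feed rate, applied back to
    # the process, as in the claim.
    speed_corrected = speed + k_speed * error
    feed_corrected = feed - k_feed * error
    return speed_corrected, feed_corrected, i_measured, i_expected

if __name__ == "__main__":
    print(control_step(heat_input=800.0, mass_input=3.0))
```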
NASA Astrophysics Data System (ADS)
Yeung, Chung-Hei (Simon)
The study of compressor instabilities in gas turbine engines has received much attention in recent years. In particular, rotating stall and surge are major causes of problems ranging from component stress and lifespan reduction to engine explosion. In this thesis, modeling and control of rotating stall and surge using a bleed valve and air injection is studied and validated on a low-speed, single-stage axial compressor at Caltech. Bleed valve control of stall is achieved only when the compressor characteristic is actuated, due to the fast growth rate of the stall cell compared to the rate limit of the valve. Furthermore, experimental results show that the actuator rate requirement for stall control is reduced by a factor of fourteen via compressor characteristic actuation. Analytical expressions based on low order models (2-3 states) and a high fidelity simulation (37 states) tool are developed to estimate the minimum rate requirement of a bleed valve for control of stall. A comparison of the tools to experiments shows good qualitative agreement, with increasing quantitative accuracy as the complexity of the underlying model increases. Air injection control of stall and surge is also investigated. Simultaneous control of stall and surge is achieved using axisymmetric air injection. Three cases with different injector back pressure are studied. Surge control via binary air injection is achieved in all three cases. Simultaneous stall and surge control is achieved for two of the cases, but is not achieved for the lowest authority case. This is consistent with previous results for control of stall with axisymmetric air injection without a plenum attached. Non-axisymmetric air injection control of stall and surge is also studied. Three existing control algorithms found in the literature are modeled and analyzed. A three-state model is obtained for each algorithm. For two cases, conditions for linear stability and bifurcation criticality on control of rotating stall are derived and expressed in terms of implementation-oriented variables such as number of injectors. For the third case, bifurcation criticality conditions are not obtained due to complexity, though the linear stability property is derived. A theoretical comparison between the three algorithms is made, via the use of low-order models, to investigate pros and cons of the algorithms in the context of operability. The effects of static distortion on the compressor facility at Caltech are characterized experimentally. Results consistent with the literature are obtained. Simulations via a high fidelity model (34 states) are also performed and show good qualitative as well as quantitative agreement with experiments. A non-axisymmetric pulsed air injection controller for stall is shown to be robust to static distortion.
Backup Attitude Control Algorithms for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
ODonnell, James R., Jr.; Andrews, Stephen F.; Ericsson-Jackson, Aprille J.; Flatley, Thomas W.; Ward, David K.; Bay, P. Michael
1999-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The MAP spacecraft will perform its mission, studying the early origins of the universe, in a Lissajous orbit around the Earth-Sun L(sub 2) Lagrange point. Due to limited mass, power, and financial resources, a traditional reliability concept involving fully redundant components was not feasible. This paper will discuss the redundancy philosophy used on MAP, describe the hardware redundancy selected (and why), and present backup modes and algorithms that were designed in lieu of additional attitude control hardware redundancy to improve the odds of mission success. Three of these modes have been implemented in the spacecraft flight software. The first onboard mode allows the MAP Kalman filter to be used with digital sun sensor (DSS) derived rates, in case of the failure of one of MAP's two two-axis inertial reference units. Similarly, the second onboard mode allows a star tracker only mode, using attitude and derived rate from one or both of MAP's star trackers for onboard attitude determination and control. The last backup mode onboard allows a sun-line angle offset to be commanded that will allow solar radiation pressure to be used for momentum management and orbit stationkeeping. In addition to the backup modes implemented on the spacecraft, two backup algorithms have been developed in the event of less likely contingencies. One of these is an algorithm for implementing an alternative scan pattern to MAP's nominal dual-spin science mode using only one or two reaction wheels and thrusters. Finally, an algorithm has been developed that uses thruster one shots while in science mode for momentum management. This algorithm has been developed in case system momentum builds up faster than anticipated, to allow adequate momentum management while minimizing interruptions to science. In this paper, each mode and algorithm will be discussed, and simulation results presented.
Subband Image Coding with Jointly Optimized Quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith Mark J. T.
1995-01-01
An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
Multifeature-based high-resolution palmprint recognition.
Dai, Jifeng; Zhou, Jie
2011-05-01
Palmprint is a promising biometric feature for use in access control and forensic applications. Previous research on palmprint recognition mainly concentrates on low-resolution (about 100 ppi) palmprints. But for high-security applications (e.g., forensic usage), high-resolution palmprints (500 ppi or higher) are required, from which more useful information can be extracted. In this paper, we propose a novel recognition algorithm for high-resolution palmprints. The main contributions of the proposed algorithm include the following: 1) use of multiple features, namely, minutiae, density, orientation, and principal lines, for palmprint recognition to significantly improve the matching performance of the conventional algorithm; 2) design of a quality-based and adaptive orientation field estimation algorithm which performs better than the existing algorithm in regions with a large number of creases; and 3) use of a novel fusion scheme for an identification application which performs better than conventional fusion methods, e.g., the weighted sum rule, SVMs, or the Neyman-Pearson rule. In addition, we analyze the discriminative power of different feature combinations and find that density is very useful for palmprint recognition. Experimental results on a database containing 14,576 full palmprints show that the proposed algorithm achieves good performance. In the case of verification, the recognition system's False Rejection Rate (FRR) is 16 percent, which is 17 percent lower than that of the best existing algorithm at a False Acceptance Rate (FAR) of 10^-5, while in the identification experiment, the rank-1 live-scan partial palmprint recognition rate is improved from 82.0 to 91.7 percent.
A Framework for Optimal Control Allocation with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc
2010-01-01
Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof-of-concept demonstration of the framework in a simulation of a generic transport aircraft is presented.
A Spiking Neural Network in sEMG Feature Extraction.
Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor
2015-11-03
We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as feature extractor. We demonstrate that the classification accuracy of the proposed model could reach high values comparable with existing sEMG interface systems. Moreover, the algorithm sensibility for different sEMG collecting systems characteristics was estimated. Results showed rather equal accuracy, despite a significant sampling rate difference. The proposed algorithm was successfully tested for mobile robot control.
Approximated affine projection algorithm for feedback cancellation in hearing aids.
Lee, Sangmin; Kim, In-Young; Park, Young-Cheol
2007-09-01
We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
Making predictions in a changing world-inference, uncertainty, and learning.
O'Reilly, Jill X
2013-01-01
To function effectively, brains need to make predictions about their environment based on past experience, i.e., they need to learn about their environment. The algorithms by which learning occurs are of interest to neuroscientists, both in their own right (because they exist in the brain) and as a tool to model participants' incomplete knowledge of task parameters and hence, to better understand their behavior. This review focusses on a particular challenge for learning algorithms-how to match the rate at which they learn to the rate of change in the environment, so that they use as much observed data as possible whilst disregarding irrelevant, old observations. To do this algorithms must evaluate whether the environment is changing. We discuss the concepts of likelihood, priors and transition functions, and how these relate to change detection. We review expected and estimation uncertainty, and how these relate to change detection and learning rate. Finally, we consider the neural correlates of uncertainty and learning. We argue that the neural correlates of uncertainty bear a resemblance to neural systems that are active when agents actively explore their environments, suggesting that the mechanisms by which the rate of learning is set may be subject to top down control (in circumstances when agents actively seek new information) as well as bottom up control (by observations that imply change in the environment).
Li, Dan; Hu, Xiaoguang
2017-03-01
Because of the high availability requirements from weapon equipment, an in-depth study has been conducted on the real-time fault-tolerance of the widely applied Compact PCI (CPCI) bus measurement and control system. A redundancy design method that uses heartbeat detection to connect the primary and alternate devices has been developed. To address the low successful execution rate and relatively large waste of time slices in the primary version of the task software, an improved algorithm for real-time fault-tolerant scheduling is proposed based on the Basic Checking available time Elimination idle time (BCE) algorithm, applying a single-neuron self-adaptive proportion sum differential (PSD) controller. The experimental validation results indicate that this system has excellent redundancy and fault-tolerance, and the newly developed method can effectively improve the system availability. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.
Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J; Stevens, Craig W; Kim, Jongphil; Yue, Binglin; Demarco, Marylou; Zhang, Geoffrey G; Moros, Eduardo G; Feygelman, Vladimir
2014-04-01
Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom previously been directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Twenty-five patients planned with PB and 4 patients planned with the CCC algorithms to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99GITV = 7.4 Gy, ΔD99PTV = 10.4 Gy, ΔV90GITV = 13.7%, ΔV90PTV = 37.6%, ΔD95PTV = 9.8 Gy, and ΔDISO = 3.4 Gy. GITV = gross internal tumor volume. Local control rates in patients who were planned to the same nominal dose with the PB and CCC algorithms were statistically significantly different. Possible alternative explanations are described in the report, although they are not thought likely to explain the difference. We conclude that the difference is due to relative dosimetric underdosing of tumors with the PB algorithm. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1987-01-01
The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1988-01-01
The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.
Nonlinear convergence active vibration absorber for single and multiple frequency vibration control
NASA Astrophysics Data System (ADS)
Wang, Xi; Yang, Bintang; Guo, Shufeng; Zhao, Wenqiang
2017-12-01
This paper presents a nonlinear convergence algorithm for an active dynamic undamped vibration absorber (ADUVA). The damping of the absorber is ignored in this algorithm to strengthen the vibration suppressing effect and, at the same time, simplify the algorithm. The simulation and experimental results indicate that this nonlinear convergence ADUVA can help significantly suppress vibration caused by both single- and multiple-frequency excitation. The proposed nonlinear algorithm is composed of equivalent dynamic modeling equations and a frequency estimator. Both the single- and multiple-frequency ADUVA are mathematically imitated by the same mechanical structure with a mass body and a voice coil motor (VCM). The nonlinear convergence estimator is applied to simultaneously satisfy the requirements of a fast convergence rate and a small steady-state frequency error, which are incompatible for a linear convergence estimator. The convergence of the nonlinear algorithm is mathematically proved, and its non-divergent characteristic is theoretically guaranteed. The vibration suppression experiments demonstrate that the nonlinear ADUVA can accelerate the convergence rate of vibration suppression and achieve greater oscillation attenuation than the linear ADUVA.
Neuroprosthetic Decoder Training as Imitation Learning.
Merel, Josh; Carlson, David; Paninski, Liam; Cunningham, John P
2016-05-01
Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
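A toy version of the DAgger-style decoder training described above is sketched below for a 2-D cursor: roll out the current linear decoder in closed loop, relabel every neural sample with an oracle action (intended velocity toward the target), aggregate the data, and refit by least squares. The neural encoding model, oracle, and dimensions are synthetic stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N_NEURONS, DIM = 30, 2
W_true = rng.normal(size=(N_NEURONS, DIM))      # synthetic neural encoding model

def neural_activity(intended_vel):
    return W_true @ intended_vel + 0.5 * rng.normal(size=N_NEURONS)

def oracle_action(cursor, target, speed=1.0):
    """Surrogate for user intent: unit-speed velocity straight at the target."""
    d = target - cursor
    n = np.linalg.norm(d)
    return speed * d / n if n > 1e-6 else np.zeros(DIM)

def rollout(decoder, target, steps=50, dt=0.1):
    """Run the current decoder in closed loop; record (activity, oracle) pairs."""
    cursor, X, Y = np.zeros(DIM), [], []
    for _ in range(steps):
        intent = oracle_action(cursor, target)
        x = neural_activity(intent)
        X.append(x)
        Y.append(intent)                        # DAgger relabels with the oracle,
        cursor = cursor + dt * (decoder @ x)    # but the *decoder* drives the cursor
    return np.array(X), np.array(Y), np.linalg.norm(cursor - target)

# DAgger loop: aggregate data across iterations and refit by least squares.
decoder = np.zeros((DIM, N_NEURONS))
X_all, Y_all = np.empty((0, N_NEURONS)), np.empty((0, DIM))
for it in range(5):
    X, Y, final_err = rollout(decoder, target=np.array([5.0, -3.0]))
    X_all, Y_all = np.vstack([X_all, X]), np.vstack([Y_all, Y])
    decoder = np.linalg.lstsq(X_all, Y_all, rcond=None)[0].T
    print("iteration %d: final distance to target %.2f" % (it, final_err))
```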
Evolutionary algorithms for the optimization of advective control of contaminated aquifer zones
NASA Astrophysics Data System (ADS)
Bayer, Peter; Finkel, Michael
2004-06-01
Simple genetic algorithms (SGAs) and derandomized evolution strategies (DESs) are employed to adapt well capture zones for the hydraulic optimization of pump-and-treat systems. A hypothetical contaminant site in a heterogeneous aquifer serves as an application template. On the basis of the results from numerical flow modeling, particle tracking is applied to delineate the pathways of the contaminants. The objective is to find the minimum pumping rate of up to eight recharge wells within a downgradient well placement area. Both the well coordinates and the pumping rates are subject to optimization, leading to a mixed discrete-continuous problem. This article discusses the ideal formulation of the objective function, for which the number of particles and the total pumping rate are used as decision criteria. Boundary updating is introduced, which enables the reorganization of the decision space limits by the incorporation of experience from previous optimization runs. Throughout the study the algorithms' capabilities are evaluated in terms of the number of model runs which are needed to identify optimal and suboptimal solutions. Despite the complexity of the problem, both evolutionary algorithm variants prove to be suitable for finding suboptimal solutions. The DES with weighted recombination turns out to be the ideal algorithm for finding optimal solutions. Though it works with real-coded decision parameters, it proves to be suitable for adjusting discrete well positions. In principle, the representation of well positions as binary strings in the SGA is ideal. However, even if the SGA takes advantage of bookkeeping, the required fine discretization of pumping rates results in long binary strings, which escalates the number of model runs needed to find an optimal solution. Since the SGA string lengths increase with the number of wells, the DES gains superiority, particularly for an increasing number of wells. As the DES is a self-adaptive algorithm, it proves to be a more robust optimization method for the selected advective control problem than the SGA variants of this study, exhibiting a less stochastic search, which is reflected in the smaller variability of the solutions found.
A Generic Guidance and Control Structure for Six-Degree-of-Freedom Conceptual Aircraft Design
NASA Technical Reports Server (NTRS)
Cotting, M. Christopher; Cox, Timothy H.
2005-01-01
A control system framework is presented for both real-time and batch six-degree-of-freedom simulation. This framework allows stabilization and control with multiple command options, from body rate control to waypoint guidance. Also, pilot commands can be used to operate the simulation in a pilot-in-the-loop environment. This control system framework is created by using direct vehicle state feedback with nonlinear dynamic inversion. A direct control allocation scheme is used to command aircraft effectors. Online B-matrix estimation is used in the control allocation algorithm for maximum algorithm flexibility. Primary uses for this framework include conceptual design and early preliminary design of aircraft, where vehicle models change rapidly and a knowledge of vehicle six-degree-of-freedom performance is required. A simulated airbreathing hypersonic vehicle and a simulated high performance fighter are controlled to demonstrate the flexibility and utility of the control system.
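The sketch below illustrates two of the ingredients named in the abstract in their generic textbook form: a recursive-least-squares estimate of the control effectiveness (B) matrix from effector-increment/measured-moment pairs, and pseudo-inverse (direct) allocation of a commanded moment across redundant effectors. The dimensions, forgetting factor, and excitation signal are assumptions; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
N_MOMENTS, N_EFFECTORS = 3, 6
B_true = rng.normal(size=(N_MOMENTS, N_EFFECTORS))   # unknown plant effectiveness

class OnlineBEstimator:
    """Row-wise recursive least squares estimate of B from (du, d_moment) pairs."""
    def __init__(self, lam=0.98):
        self.B = np.zeros((N_MOMENTS, N_EFFECTORS))
        self.P = [np.eye(N_EFFECTORS) * 100.0 for _ in range(N_MOMENTS)]
        self.lam = lam

    def update(self, du, dm):
        for i in range(N_MOMENTS):
            P = self.P[i]
            k = P @ du / (self.lam + du @ P @ du)
            self.B[i] += k * (dm[i] - self.B[i] @ du)
            self.P[i] = (P - np.outer(k, du @ P)) / self.lam

def allocate(B_est, moment_cmd):
    """Direct (pseudo-inverse) control allocation of the commanded moments."""
    return np.linalg.pinv(B_est) @ moment_cmd

est = OnlineBEstimator()
u_prev = np.zeros(N_EFFECTORS)
for step in range(200):
    u = rng.normal(scale=0.1, size=N_EFFECTORS)      # persistent excitation
    dm = B_true @ (u - u_prev)                       # measured moment increment
    est.update(u - u_prev, dm)
    u_prev = u

print("B estimation error:", np.linalg.norm(est.B - B_true))
print("effector commands for a unit roll moment:",
      np.round(allocate(est.B, np.array([1.0, 0.0, 0.0])), 3))
```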
Cos, Oriol; Ramon, Ramon; Montesinos, José Luis; Valero, Francisco
2006-09-05
A predictive control algorithm coupled with a PI feedback controller has been satisfactorily implemented in heterologous Rhizopus oryzae lipase production by the Pichia pastoris methanol utilization slow (Mut(s)) phenotype. This control algorithm has allowed the study of the effect of methanol concentration, ranging from 0.5 to 1.75 g/L, on heterologous protein production. The maximal lipolytic activity (490 UA/mL), specific yield (11,236 UA/g(biomass)), productivity (4,901 UA/L·h), and specific productivity (112 UA/g(biomass)·h) were reached for a methanol concentration of 1 g/L. These parameters are almost double those obtained with manual control at a similar methanol set-point. The study of the specific growth, consumption, and production rates showed different patterns for these rates depending on the methanol concentration set-point. The results obtained have shown the need to implement a robust control scheme when reproducible quality and productivity are sought. It has been demonstrated that the model-based control proposed here is a very efficient, robust, and easy-to-implement strategy from an industrial application point of view. (c) 2006 Wiley Periodicals, Inc.
Clark, Steven L; Hamilton, Emily F; Garite, Thomas J; Timmins, Audra; Warrick, Philip A; Smith, Samuel
2017-02-01
Despite intensive efforts directed at initial training in fetal heart rate interpretation, continuing medical education, board certification/recertification, team training, and the development of specific protocols for the management of abnormal fetal heart rate patterns, the goals of consistently preventing hypoxia-induced fetal metabolic acidemia and neurologic injury remain elusive. The purpose of this study was to validate a recently published algorithm for the management of category II fetal heart rate tracings, to examine reasons for the birth of infants with significant metabolic acidemia despite the use of electronic fetal heart rate monitoring, and to examine critically the limits of electronic fetal heart rate monitoring in the prevention of neonatal metabolic acidemia. The potential performance of electronic fetal heart rate monitoring under ideal circumstances was evaluated in an outcomes-blinded examination of the fetal heart rate tracings of infants with metabolic acidemia at birth (base deficit, >12) and matched control infants (base deficit, <8) under the following conditions: (1) expert primary interpretation, (2) use of a published algorithm that was developed and endorsed by a large group of national experts, (3) assumption of a 30-minute period of evaluation for noncritical category II fetal heart rate tracings, followed by delivery within 30 minutes, (4) evaluation without the need to provide patient care simultaneously, and (5) comparison of results under these circumstances with those achieved in actual clinical practice. During the study period, 120 infants were identified with an arterial cord blood base deficit of >12 mM/L. Matched control infants were not demographically different from subjects. In actual practice, operative intervention on the basis of abnormal fetal heart rate tracings occurred in 36 of 120 fetuses (30.0%) with metabolic acidemia. Based on expert, algorithm-assisted reviews, 55 of 120 patients with acidemia (45.8%) were judged to need operative intervention for abnormal fetal heart rate tracings. This difference was significant (P=.016). In infants who were born with a base deficit of >12 mM/L in which blinded, algorithm-assisted expert review indicated the need for operative delivery, the decision for delivery would have been made an average of 131 minutes before the actual delivery. The rate of expert intervention for fetal heart rate concerns in the nonacidemic control group (22/120; 18.3%) was similar to the actual intervention rate (23/120; 19.2%; P=1.0). Expert review did not mandate earlier delivery in 65 of 120 patients with metabolic acidemia. The primary features of these 65 cases included the occurrence of sentinel events with prolonged deceleration just before delivery, the rapid deterioration of nonemergent category II fetal heart rate tracings before realistic time frames for recognition and intervention, and the failure of recognized fetal heart rate patterns such as variability to identify metabolic acidemia. Expert, algorithm-assisted fetal heart rate interpretation has the potential to improve standard clinical performance by facilitating significantly earlier recognition of some tracings that are associated with metabolic acidemia without increasing the rate of operative intervention. However, this improvement is modest.
Of infants who are born with metabolic acidemia, only approximately one-half potentially could be identified and have delivery expedited even under ideal circumstances, which are probably not realistic in current US practice. This represents the limits of electronic fetal heart rate monitoring performance. Additional technologies will be necessary if the goal of the prevention of neonatal metabolic acidemia is to be realized. Copyright © 2016 Elsevier Inc. All rights reserved.
Skinner, James E; Anchin, Jerry M; Weiss, Daniel N
2008-01-01
Heart rate variability (HRV) reflects both cardiac autonomic function and risk of arrhythmic death (AD). Reduced indices of HRV based on linear stochastic models are independent risk factors for AD in post-myocardial infarct cohorts. Indices based on nonlinear deterministic models have a significantly higher sensitivity and specificity for predicting AD in retrospective data. A need exists for nonlinear analytic software easily used by a medical technician. In the current study, an automated nonlinear algorithm, the time-dependent point correlation dimension (PD2i), was evaluated. The electrocardiogram (ECG) data were provided through an National Institutes of Health-sponsored internet archive (PhysioBank) and consisted of all 22 malignant arrhythmia ECG files (VF/VT) and 22 randomly selected arrhythmia files as the controls. The results were blindly calculated by automated software (Vicor 2.0, Vicor Technologies, Inc., Boca Raton, FL) and showed all analyzable VF/VT files had PD2i < 1.4 and all analyzable controls had PD2i > 1.4. Five VF/VT and six controls were excluded because surrogate testing showed the RR-intervals to contain noise, possibly resulting from the low digitization rate of the ECGs. The sensitivity was 100%, specificity 85%, relative risk > 100; p < 0.01, power > 90%. Thus, automated heartbeat analysis by the time-dependent nonlinear PD2i-algorithm can accurately stratify risk of AD in public data made available for competitive testing of algorithms. PMID:18728829
NASA Technical Reports Server (NTRS)
Childs, A. G.
1971-01-01
A discrete steepest ascent method which allows controls which are not piecewise constant (for example, it allows all continuous piecewise linear controls) was derived for the solution of optimal programming problems. This method is based on the continuous steepest ascent method of Bryson and Denham and new concepts introduced by Kelley and Denham in their development of compatible adjoints for taking into account the effects of numerical integration. The method is a generalization of the algorithm suggested by Canon, Cullum, and Polak, with the details of the gradient computation given. The discrete method was compared with the continuous method for an aerodynamics problem for which an analytic solution is given by Pontryagin's maximum principle, and numerical results are presented. The discrete method converges more rapidly than the continuous method at first, but then, for some undetermined reason, loses its exponential convergence rate. A comparison was also made with the algorithm of Canon, Cullum, and Polak using piecewise constant controls. This algorithm is very competitive with the continuous algorithm.
Method for controlling gas metal arc welding
Smartt, H.B.; Einerson, C.J.; Watkins, A.D.
1987-08-10
The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections. 3 figs., 1 tab.
Fault-tolerant nonlinear adaptive flight control using sliding mode online learning.
Krüger, Thomas; Schnetter, Philipp; Placzek, Robin; Vörsmann, Peter
2012-08-01
An expanded nonlinear model inversion flight control strategy using sliding mode online learning for neural networks is presented. The proposed control strategy is implemented for a small unmanned aircraft system (UAS). This class of aircraft is very susceptible to nonlinearities such as atmospheric turbulence, model uncertainties and, of course, system failures. These systems therefore provide a suitable testbed for evaluating fault-tolerant, adaptive flight control strategies. Within this work the concept of feedback linearization is combined with feedforward neural networks to compensate for inversion errors and other nonlinear effects. Backpropagation-based adaptation laws of the network weights are used for online training. Within these adaptation laws the standard gradient descent backpropagation algorithm is augmented with the concept of sliding mode control (SMC). Implemented as a learning algorithm, this nonlinear control strategy treats the neural network as a controlled system and allows a stable, dynamic calculation of the learning rates. While considering the system's stability, this robust online learning method therefore offers a higher speed of convergence, especially in the presence of external disturbances. The SMC-based flight controller is tested and compared with the standard gradient descent backpropagation algorithm in the presence of system failures. Copyright © 2012 Elsevier Ltd. All rights reserved.
Error field optimization in DIII-D using extremum seeking control
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; ...
2016-06-03
A closed-loop error field control algorithm is implemented in the Plasma Control System of the DIII-D tokamak and used to identify optimal control currents during a single plasma discharge. The algorithm, based on established extremum seeking control theory, exploits the link in tokamaks between maximizing the toroidal angular momentum and minimizing deleterious non-axisymmetric magnetic fields. Slowly-rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real-time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well-suited for multi-coil, multi-harmonic error field optimizations in disruption sensitive devices as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of the convergence, and identify future algorithm upgrades that may allow more rapid convergence that projects to convergence times in ITER on the order of tens of seconds.
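A single-parameter, textbook extremum seeking loop is sketched below against a synthetic "rotation versus coil current" map: inject a sinusoidal dither, remove the slow mean, demodulate against the dither, and integrate the resulting gradient estimate. The gains, dither frequency, and quadratic plant are placeholders; this is not the DIII-D Plasma Control System implementation.

```python
import numpy as np

def plant_rotation(coil_current):
    """Synthetic stand-in: toroidal rotation peaks at the optimal coil current."""
    return 100.0 - 4.0 * (coil_current - 2.5) ** 2

def extremum_seek(steps=4000, dt=0.01, a=0.1, omega=2.0, k=0.5):
    """Textbook ES loop: dither, remove the slow mean, demodulate, integrate."""
    theta = 0.0                          # estimate of the optimal coil current
    y_dc = plant_rotation(theta)         # slowly tracked mean rotation
    for n in range(steps):
        t = n * dt
        dither = a * np.sin(omega * t)
        y = plant_rotation(theta + dither)
        y_dc += 0.005 * (y - y_dc)                  # low-pass (mean) tracker
        grad_est = (y - y_dc) * np.sin(omega * t)   # demodulation ~ (a/2) f'(theta)
        theta += dt * k * grad_est                  # climb the rotation gradient
    return theta

print("converged coil-current estimate:", round(extremum_seek(), 2))
```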
Flight data processing with the F-8 adaptive algorithm
NASA Technical Reports Server (NTRS)
Hartmann, G.; Stein, G.; Petersen, K.
1977-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer and surface position) are telemetered to a ground computer, which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.
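The parallel-filter idea can be pictured with the scalar toy below: run one Kalman filter per hypothesized parameter value on a fixed grid, accumulate the log-likelihood of each filter's innovations, and report the most likely parameter, with no iterative estimation. The plant, noise levels, and parameter grid are made-up stand-ins for the F-8 parameter space.

```python
import numpy as np

rng = np.random.default_rng(3)
A_TRUE, Q, R = 0.92, 0.05, 0.2          # unknown pole, process/measurement noise
A_GRID = [0.80, 0.88, 0.92, 0.97]       # fixed parameter-space grid of filters

def simulate(n=300):
    x, ys = 0.0, []
    for _ in range(n):
        x = A_TRUE * x + 0.5 + rng.normal(scale=np.sqrt(Q))
        ys.append(x + rng.normal(scale=np.sqrt(R)))
    return np.array(ys)

def parallel_kalman(ys):
    """One scalar Kalman filter per hypothesized pole; accumulate the
    log-likelihood of each filter's innovations."""
    n_f = len(A_GRID)
    xhat, P, loglik = np.zeros(n_f), np.ones(n_f), np.zeros(n_f)
    for y in ys:
        for i, a in enumerate(A_GRID):
            x_pred = a * xhat[i] + 0.5
            P_pred = a * a * P[i] + Q
            s = P_pred + R                       # innovation variance
            innov = y - x_pred
            loglik[i] += -0.5 * (np.log(2 * np.pi * s) + innov ** 2 / s)
            k = P_pred / s
            xhat[i] = x_pred + k * innov
            P[i] = (1 - k) * P_pred
    return A_GRID[int(np.argmax(loglik))], loglik

best, ll = parallel_kalman(simulate())
print("most likely pole on the grid:", best)
```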
Rainfall Estimation over the Nile Basin using an Adapted Version of the SCaMPR Algorithm
NASA Astrophysics Data System (ADS)
Habib, E. H.; Kuligowski, R. J.; Elshamy, M. E.; Ali, M. A.; Haile, A.; Amin, D.; Eldin, A.
2011-12-01
Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). This study reports on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application over the Nile Basin. The algorithm uses a set of rainfall predictors from multi-spectral infrared cloud top observations and self-calibrates them to a set of predictands from Microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels recently available to NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as SSM/I, SSMIS, AMSU, AMSR-E, and TMI. The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of coefficients from newly arriving MW rain rates, and calibration using static coefficients that are derived from IR-MW data from past observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms (e.g., the 'Tropical Rainfall Measuring Mission (TRMM) and other sources' (TRMM-3B42) product, and the National Oceanic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product). The algorithm has several potential future applications, such as improving the performance accuracy of hydrologic forecasting models over the Nile Basin, and utilizing the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability.
NASA Astrophysics Data System (ADS)
Kumar, Rishi; Mevada, N. Ramesh; Rathore, Santosh; Agarwal, Nitin; Rajput, Vinod; Sinh Barad, AjayPal
2017-08-01
To improve the weld quality of aluminum (Al) plate, a TIG welding system was prepared in which the welding current, shielding gas flow rate, and current polarity can be controlled during the welding process. In the present work, an attempt has been made to study the effect of welding current, current polarity, and shielding gas flow rate on the tensile strength of the weld joint. Based on the number of parameters and their levels, the Response Surface Methodology technique was selected as the Design of Experiment. To understand the influence of the input parameters on the ultimate tensile strength of the weldment, an ANOVA analysis was carried out. The TIG welding process is also described and optimized using the firefly algorithm, a nature-inspired metaheuristic developed by Xin-She Yang at Cambridge University in 2007. A general formulation of the firefly algorithm is presented together with an analytical mathematical model that optimizes the TIG welding process through a single equivalent objective function.
Opto-numerical procedures supporting dynamic lower limbs monitoring and their medical diagnosis
NASA Astrophysics Data System (ADS)
Witkowski, Marcin; Kujawińska, Malgorzata; Rapp, Walter; Sitnik, Robert
2006-01-01
New optical full-field shape measurement systems allow transient shape capture at rates between 15 and 30 Hz. These frame rates are sufficient to monitor controlled movements used, e.g., for medical examination purposes. In this paper we present a set of algorithms which may be applied to process data gathered by a fringe projection method implemented for lower limb shape measurement. The purpose of the presented algorithms is to locate anatomical structures based on the limb shape and its deformation over time. The algorithms are based on local surface curvature calculation and on analysis of changes in the curvature maps during the measurement sequence. One anatomical structure of high medical interest that can be scanned and analyzed is the patella. Tracking the patella's position and orientation under dynamic conditions may help detect pathological patella movements and aid in knee joint disease diagnosis. The usefulness of the developed algorithms is therefore demonstrated on examples of patella localization and monitoring.
NASA Astrophysics Data System (ADS)
Lu, Lin; Chang, Yunlong; Li, Yingmin; Lu, Ming
2013-05-01
An orthogonal experiment combined with a multivariate nonlinear regression equation was conducted to characterize the influence of an external transverse magnetic field and the Ar flow rate on weld quality when welding condenser pipe by high-speed argon tungsten-arc welding (TIG for short). The magnetic induction and the Ar flow rate were used as the optimization variables, the tensile strength of the weld was set as the objective function on the basis of genetic algorithm theory, and an optimal design was then carried out. The optimization variables were constrained according to the requirements of physical production. The genetic algorithm in MATLAB was used for the computation. A comparison between the optimized results and the experimental parameters was made. The results showed that optimal process parameters could be chosen by means of the genetic algorithm even when many optimization variables are involved in high-speed welding, and the optimized welding parameters coincided with the experimental results.
A real time sorting algorithm to time sort any deterministic time disordered data stream
NASA Astrophysics Data System (ADS)
Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.
2017-12-01
In new-generation high-intensity, high-energy physics experiments, millions of free-streaming, high-rate data sources are to be read out. Free-streaming data with an associated time-stamp can only be controlled by thresholds, as no trigger information is available for the readout. Therefore, these readouts are prone to collecting a large amount of noise and unwanted data. For this reason, these experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to perform online processing of the data to extract useful information from the full data set. Without trigger information, pre-processing of the free-streaming data can only be done through time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, and therefore the unsorted merged data require significant computational effort for real-time sorting before analysis. The present work reports a new high-speed, scalable data stream sorting algorithm with its architectural design, verified through Field Programmable Gate Array (FPGA) based hardware simulation. Realistic time-based simulated data, likely to be collected in a high energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero suppression features to make it efficient for high-rate data streams. This algorithm is best suited for online data streams with deterministic time disorder/unsorting on FPGA-like hardware.
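In software terms, the core idea can be sketched as a bounded-delay sorter: buffer time-stamped items in a priority queue and release an item only once the largest timestamp seen exceeds it by the worst-case path-delay spread, so the output is fully time-sorted for any bounded disorder. This is only a conceptual stand-in for the paper's FPGA design with parallel read/write blocks and zero suppression; the delay bound and synthetic stream are assumptions.

```python
import heapq
import random

class BoundedDisorderSorter:
    """Release items in timestamp order, assuming disorder is bounded by
    max_delay (the worst-case spread in source path delays)."""

    def __init__(self, max_delay):
        self.max_delay = max_delay
        self.heap = []
        self.latest_seen = None

    def push(self, timestamp, payload):
        heapq.heappush(self.heap, (timestamp, payload))
        self.latest_seen = timestamp if self.latest_seen is None \
            else max(self.latest_seen, timestamp)
        # Anything older than (latest - max_delay) can no longer be preceded
        # by a not-yet-seen item, so it is safe to emit.
        out = []
        while self.heap and self.heap[0][0] <= self.latest_seen - self.max_delay:
            out.append(heapq.heappop(self.heap))
        return out

    def flush(self):
        out = []
        while self.heap:
            out.append(heapq.heappop(self.heap))
        return out

if __name__ == "__main__":
    random.seed(0)
    # Synthetic free-streaming data: timestamps jittered by per-source delays.
    stream = [(t + random.randint(0, 8), "hit%d" % t) for t in range(100)]
    sorter = BoundedDisorderSorter(max_delay=8)
    emitted = []
    for ts, payload in stream:
        emitted.extend(sorter.push(ts, payload))
    emitted.extend(sorter.flush())
    print("output time-sorted:", all(emitted[i][0] <= emitted[i + 1][0]
                                     for i in range(len(emitted) - 1)))
```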
Dynamics simulation and controller interfacing for legged robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reichler, J.A.; Delcomyn, F.
2000-01-01
Dynamics simulation can play a critical role in the engineering of robotic control code, and there exist a variety of strategies both for building physical models and for interacting with these models. This paper presents an approach to dynamics simulation and controller interfacing for legged robots, and contrasts it to existing approaches. The authors describe dynamics algorithms and contact-resolution strategies for multibody articulated mobile robots based on the decoupled tree-structure approach, and present a novel scripting language that provides a unified framework for control-code interfacing, user-interface design, and data analysis. Special emphasis is placed on facilitating the rapid integration of control algorithms written in a standard object-oriented language (C++), the production of modular, distributed, reusable controllers, and the use of parameterized signal-transmission properties such as delay, sampling rate, and noise.
Mochizuki, Tomoki; Amagai, Takashi; Tani, Akira
2018-09-01
Monoterpenes emitted from plants contribute to the formation of secondary pollution and affect the climate system. Monoterpene emission rates may be affected by environmental changes such as increasing CO2 concentration caused by fossil fuel burning and drought stress induced by climate change. We measured monoterpene emissions from Cryptomeria japonica clone saplings grown under different CO2 concentrations (control: ambient CO2 level, elevated CO2: 1000 μmol/mol). The saplings were planted in the ground and we did not artificially control the soil water content (SWC). The relationship between the monoterpene emissions and naturally varying SWC was investigated. The dominant monoterpene was α-pinene, followed by sabinene. The monoterpene emission rates were exponentially correlated with temperature for all measurements and were normalized (35°C) for each measurement day. The daily normalized monoterpene emission rates (Es0.10) were positively and linearly correlated with SWC under both control and elevated CO2 conditions (control: r2=0.55, elevated CO2: r2=0.89). The slope of the regression line of Es0.10 against SWC was significantly higher under elevated CO2 than under control conditions (ANCOVA: P<0.01), indicating that the effect of CO2 concentration on monoterpene emission rates differed by soil water status. The monoterpene emission rates estimated by considering temperature and SWC (Improved G93 algorithm) agreed better with the measured monoterpene emission rates than the emission rates estimated by considering temperature alone (G93 algorithm). Our results demonstrated that the combined effects of SWC and CO2 concentration are important for controlling the monoterpene emissions from C. japonica clone saplings. If these relationships can be applied to other coniferous tree species, our results may be useful for improving the accuracy of monoterpene emission estimates from coniferous forests as affected by climate change in the present and foreseeable future. Copyright © 2018 Elsevier B.V. All rights reserved.
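For reference, the temperature-only G93 form the study starts from is E = Es · exp(β(T − Ts)), and the described improvement appears to amount to letting the standardized emission factor Es vary linearly with soil water content. The sketch below writes out that combination with illustrative numbers (β = 0.09 per °C, Ts = 35 °C, and a made-up SWC regression); the fitted coefficients for the C. japonica saplings are in the paper, not the abstract.

```python
import numpy as np

BETA = 0.09      # 1/degC, a commonly used temperature coefficient for G93
T_S = 35.0       # degC, the standardization temperature used in the abstract

def g93(e_s, temp_c):
    """Temperature-only G93 form: E = E_s * exp(beta * (T - T_s))."""
    return e_s * np.exp(BETA * (temp_c - T_S))

def e_s_from_swc(swc, intercept=0.2, slope=8.0):
    """Improved form: standardized emission factor increases linearly with soil
    water content (slope and intercept here are placeholders, not fitted values)."""
    return intercept + slope * swc

def improved_g93(swc, temp_c):
    return g93(e_s_from_swc(swc), temp_c)

if __name__ == "__main__":
    for swc in (0.05, 0.10, 0.20):
        print("SWC=%.2f ->" % swc,
              ["%.2f" % improved_g93(swc, t) for t in (20.0, 30.0, 35.0)])
```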
A sweep algorithm for massively parallel simulation of circuit-switched networks
NASA Technical Reports Server (NTRS)
Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.
1992-01-01
A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk-reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384 processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel IPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.
Serious injury prediction algorithm based on large-scale data and under-triage control.
Nishimoto, Tetsuya; Mukaigawa, Kosuke; Tominaga, Shigeru; Lubbe, Nils; Kiuchi, Toru; Motomura, Tomokazu; Matsumoto, Hisashi
2017-01-01
The present study was undertaken to construct an algorithm for an advanced automatic collision notification system based on national traffic accident data compiled by the Japanese police. While US research into the development of a serious-injury prediction algorithm is based on a logistic regression algorithm using the National Automotive Sampling System/Crashworthiness Data System, the present injury prediction algorithm was based on comprehensive police data covering all accidents that occurred across Japan. The particular focus of this research is to improve the rescue of injured vehicle occupants in traffic accidents, and the present algorithm assumes the use of onboard event data recorder data, from which risk factors such as pseudo delta-V, vehicle impact location, seatbelt wearing or non-wearing, involvement in a single-impact or multiple-impact crash, and the occupant's age can be derived. As a result, a simple and handy algorithm suited for onboard vehicle installation was constructed from a sample of half of the available police data. The other half of the police data was applied to the validation testing of this new algorithm using receiver operating characteristic analysis. An additional validation was conducted using in-depth investigation of accident injuries in collaboration with prospective host emergency care institutes. The validated algorithm, named the TOYOTA-Nihon University algorithm, proved to be as useful as the US URGENCY and other existing algorithms. Furthermore, an under-triage control analysis found that the present algorithm could achieve an under-triage rate of less than 10% by setting a threshold of 8.3%. Copyright © 2016 Elsevier Ltd. All rights reserved.
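As a concrete illustration of this style of logistic-regression injury prediction with an under-triage threshold (not the published TOYOTA-Nihon University model), the sketch below scores a crash from event-data-recorder risk factors and flags it for emergency notification when the predicted probability exceeds the 8.3% operating point cited above. All coefficient values are hypothetical placeholders.

```python
import math

# Hypothetical coefficients for illustration only (not the published model).
COEF = {
    "intercept":      -6.0,
    "delta_v_kmh":     0.08,   # pseudo delta-V
    "unbelted":        1.2,    # 1 if occupant not wearing a seatbelt
    "side_impact":     0.9,    # 1 if principal impact is to the side
    "multiple_impact": 0.6,    # 1 for multiple-impact crashes
    "age_years":       0.02,
}
THRESHOLD = 0.083              # 8.3% operating point cited for <10% under-triage

def serious_injury_probability(crash):
    z = COEF["intercept"] + sum(COEF[k] * crash[k] for k in crash)
    return 1.0 / (1.0 + math.exp(-z))

def triage(crash):
    p = serious_injury_probability(crash)
    return p, p >= THRESHOLD   # True -> notify emergency services

crash = {"delta_v_kmh": 45, "unbelted": 0, "side_impact": 1,
         "multiple_impact": 0, "age_years": 67}
print(triage(crash))
```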
Description and performance analysis of a generalized optimal algorithm for aerobraking guidance
NASA Technical Reports Server (NTRS)
Evans, Steven W.; Dukeman, Greg A.
1993-01-01
A practical real-time guidance algorithm has been developed for aerobraking vehicles which nearly minimizes the maximum heating rate, the maximum structural loads, and the post-aeropass delta-V requirement for orbit insertion. The algorithm is general and reusable in the sense that a minimum of assumptions are made, thus greatly reducing the number of parameters that must be determined prior to a given mission. A particularly interesting feature is that the in-plane guidance performance is tuned by adjusting one mission-dependent parameter, the bank margin; similarly, the out-of-plane guidance performance is tuned by adjusting a plane-controller time constant. Other features of the algorithm are simplicity, efficiency and ease of use. The vehicle is assumed to be trimmed, with bank angle modulation as the method of trajectory control. Performance of this guidance algorithm is examined by its use in an aerobraking testbed program. The performance inquiry extends to a wide range of entry speeds covering a number of potential mission applications. Favorable results have been obtained with a minimum of development effort, and directions for improvement of performance are indicated.
NASA Technical Reports Server (NTRS)
Neiner, G. H.; Cole, G. L.; Arpasi, D. J.
1972-01-01
Digital computer control of a mixed-compression inlet is discussed. The inlet was terminated with a choked orifice at the compressor face station to dynamically simulate a turbojet engine. Inlet diffuser exit airflow disturbances were used. A digital version of a previously tested analog control system was used for both normal shock and restart control. Digital computer algorithms were derived using z-transform and finite difference methods. Using a sample rate of 1000 samples per second, the digital normal shock and restart controls essentially duplicated the inlet analog computer control results. At a sample rate of 100 samples per second, the control system performed adequately but was less stable.
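To illustrate the z-transform/finite-difference style of digitizing an analog control law and the role of the sample rate, the sketch below discretizes a simple proportional-integral controller with a backward-difference integral and runs it at 1000 and 100 samples per second on a toy first-order plant. The gains and plant are placeholders, not the inlet control laws of the report.

```python
class DigitalPI:
    """PI control law discretized by backward difference:
    u[k] = Kp*e[k] + Ki*Ts*sum(e[0..k])."""
    def __init__(self, kp, ki, sample_rate_hz):
        self.kp, self.ki = kp, ki
        self.ts = 1.0 / sample_rate_hz
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.ts
        return self.kp * error + self.ki * self.integral

# Same gains at 1000 and 100 samples per second; the slower loop sees the
# disturbance less often and reacts more coarsely, mirroring the reduced
# stability margin noted in the abstract.
for rate in (1000, 100):
    ctrl = DigitalPI(kp=2.0, ki=50.0, sample_rate_hz=rate)
    x = 1.0                        # toy plant state (e.g., shock-position error)
    for _ in range(rate):          # simulate one second
        u = ctrl.update(-x)
        x += (u - 5.0 * x) / rate  # x' = -5x + u, Euler step at the sample rate
    print(rate, "Hz -> residual error %.4f" % x)
```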
Modeling inter-signal arrival times for accurate detection of CAN bus signal injection attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Michael Roy; Bridges, Robert A; Combs, Frank L
Modern vehicles rely on hundreds of on-board electronic control units (ECUs) communicating over in-vehicle networks. As external interfaces to the car control networks (such as the on-board diagnostic (OBD) port, auxiliary media ports, etc.) become common, and vehicle-to-vehicle / vehicle-to-infrastructure technology is in the near future, the attack surface for vehicles grows, exposing control networks to potentially life-critical attacks. This paper addresses the need for securing the CAN bus by detecting anomalous traffic patterns via unusual refresh rates of certain commands. While previous works have identified signal frequency as an important feature for CAN bus intrusion detection, this paper provides the first such algorithm with experiments on five attack scenarios. Our data-driven anomaly detection algorithm requires only five seconds of training time (on normal data) and achieves true positive / false discovery rates of 0.9998/0.00298, respectively (micro-averaged across the five experimental tests).
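The frequency-based idea can be sketched generically: learn the mean and spread of inter-arrival times per arbitration ID from a few seconds of normal traffic, then flag frames whose inter-arrival time deviates strongly from that baseline. The code below is an illustration of this class of detector with made-up thresholds, not the paper's algorithm or its tuned parameters.

```python
import statistics
from collections import defaultdict

def train(normal_frames):
    """normal_frames: iterable of (timestamp_s, can_id) from ~5 s of clean traffic.
    Returns per-ID (mean, stdev) of inter-arrival times."""
    times = defaultdict(list)
    for t, cid in normal_frames:
        times[cid].append(t)
    model = {}
    for cid, ts in times.items():
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        if len(gaps) >= 2:
            model[cid] = (statistics.mean(gaps), statistics.stdev(gaps))
    return model

def detect(frames, model, k=4.0):
    """Flag frames whose inter-arrival time deviates more than k sigma
    from the learned mean for that ID (injected frames shrink the gap)."""
    last = {}
    alerts = []
    for t, cid in frames:
        if cid in last and cid in model:
            mu, sd = model[cid]
            if abs((t - last[cid]) - mu) > k * max(sd, 1e-6):
                alerts.append((t, cid))
        last[cid] = t
    return alerts
```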
NASA Astrophysics Data System (ADS)
Nicolosi, L.; Abt, F.; Blug, A.; Heider, A.; Tetzlaff, R.; Höfler, H.
2012-01-01
Real-time monitoring of laser beam welding (LBW) has increasingly gained importance in several manufacturing processes ranging from automobile production to precision mechanics. In the latter context, a novel algorithm for the real-time detection of spatters was implemented in a camera based on cellular neural networks. The camera can be connected to the optics of commercially available laser machines, leading to real-time monitoring of LBW processes at rates up to 15 kHz. Such high monitoring rates allow the integration of other image evaluation tasks, such as the detection of the full penetration hole, for real-time control of process parameters.
Angular Rate Sensing with GyroWheel Using Genetic Algorithm Optimized Neural Networks.
Zhao, Yuyu; Zhao, Hui; Huo, Xin; Yao, Yu
2017-07-22
GyroWheel is an integrated device that can provide three-axis control torques and two-axis angular rate sensing for small spacecraft. The large tilt angle of its rotor and the de-tuned spin rate lead to complex, non-linear dynamics as well as difficulties in measuring angular rates. In this paper, the problem of angular rate sensing with the GyroWheel is investigated. Firstly, a simplified rate sensing equation is introduced, and the error characteristics of the method are analyzed. According to the analysis results, a rate sensing principle based on torque balance theory is developed, and a practical way to estimate the angular rates within the whole operating range of the GyroWheel is provided by using explicit genetic algorithm optimized neural networks. The angular rates can be determined from the measurable values of the GyroWheel (including tilt angles, spin rate and torque coil currents), the weights and the biases of the neural networks. Finally, simulation results are presented to illustrate the effectiveness of the proposed angular rate sensing method with the GyroWheel.
Neuroprosthetic Decoder Training as Imitation Learning
Merel, Josh; Paninski, Liam; Cunningham, John P.
2016-01-01
Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user’s intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user’s intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector. PMID:27191387
H2 optimal control techniques for resistive wall mode feedback in tokamaks
NASA Astrophysics Data System (ADS)
Clement, Mitchell; Hanson, Jeremy; Bialek, Jim; Navratil, Gerald
2018-04-01
DIII-D experiments show that a new, advanced algorithm enables resistive wall mode (RWM) stability control in high performance discharges using external coils. DIII-D can excite strong, locked or nearly locked external kink modes whose rotation frequencies and growth rates are on the order of the magnetic flux diffusion time of the vacuum vessel wall. Experiments have shown that modern control techniques like linear quadratic Gaussian (LQG) control require less current than the proportional controller in use at DIII-D when using control coils external to DIII-D’s vacuum vessel. Experiments were conducted to develop control of a rotating n = 1 perturbation using an LQG controller derived from VALEN and external coils. Feedback using this LQG algorithm outperformed a proportional gain only controller in these perturbation experiments over a range of frequencies. Results from high βN experiments also show that advanced feedback techniques using external control coils may be as effective as internal control coil feedback using classical control techniques.
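As a generic illustration of the LQG structure referred to above (not the VALEN-derived DIII-D controller), the sketch below builds a discrete-time LQG regulator by pairing an LQR state-feedback gain with a steady-state Kalman filter, both obtained from discrete algebraic Riccati equations. The plant matrices and weights are arbitrary placeholders.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqg_gains(A, B, C, Q, R, W, V):
    """Return (K, L): LQR state-feedback gain and steady-state Kalman gain."""
    # Control Riccati equation -> LQR gain K
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    # Estimation Riccati equation (dual problem) -> Kalman (filter-form) gain L
    S = solve_discrete_are(A.T, C.T, W, V)
    L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)
    return K, L

# Placeholder 2-state, 1-input, 1-output plant
A = np.array([[1.0, 0.1], [0.0, 0.98]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.eye(1) * 0.1          # state / control weights
W, V = np.eye(2) * 1e-3, np.eye(1) * 1e-2  # process / measurement noise

K, L = lqg_gains(A, B, C, Q, R, W, V)
x_hat = np.zeros((2, 1))
y = np.array([[0.5]])                       # one measurement sample
u = -K @ x_hat                              # certainty-equivalence control
x_pred = A @ x_hat + B @ u                  # time update
x_hat = x_pred + L @ (y - C @ x_pred)       # measurement update
print(K, L)
```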
Autonomous proximity operations using machine vision for trajectory control and pose estimation
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Sternberg, Stanley R.
1991-01-01
A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running upon an 80386 based personal computer, using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple, so that following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single camera images of the target vehicle, upon which radial transforms were performed. Selected points of the resulting radial signatures are fed through a decision tree, to determine whether the signature matches that of the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicles can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
Turksoy, Kamuran; Samadi, Sediqeh; Feng, Jianyuan; Littlejohn, Elizabeth; Quinn, Laurie; Cinar, Ali
2016-01-01
A novel meal-detection algorithm is developed based on continuous glucose measurements. Bergman's minimal model is modified and used in an unscented Kalman filter for state estimations. The estimated rate of appearance of glucose is used for meal detection. Data from nine subjects are used to assess the performance of the algorithm. The results indicate that the proposed algorithm works successfully with high accuracy. The average change in glucose levels between the meals and the detection points is 16(±9.42) [mg/dl] for 61 successfully detected meals and snacks. The algorithm is developed as a new module of an integrated multivariable adaptive artificial pancreas control system. Meal detection with the proposed method is used to administer insulin boluses and prevent most of postprandial hyperglycemia without any manual meal announcements. A novel meal bolus calculation method is proposed and tested with the UVA/Padova simulator. The results indicate significant reduction in hyperglycemia.
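The paper's detector is built on an unscented Kalman filter estimating the rate of appearance of glucose; as a much simpler stand-in that conveys the same thresholding idea, the sketch below smooths CGM samples and raises a meal flag when the estimated rate of change stays above a threshold for several consecutive samples. All constants are illustrative, and this is not the UKF/Bergman-model algorithm of the paper.

```python
def detect_meals(cgm, dt_min=5.0, rate_thresh=1.0, persist=3, alpha=0.3):
    """cgm: list of glucose readings [mg/dl] sampled every dt_min minutes.
    Flags sample indices where the smoothed rise rate [mg/dl per min]
    exceeds rate_thresh for `persist` consecutive samples."""
    flags, run = [], 0
    smoothed = cgm[0]
    prev = smoothed
    for i, g in enumerate(cgm[1:], start=1):
        smoothed = alpha * g + (1 - alpha) * smoothed   # exponential smoothing
        rate = (smoothed - prev) / dt_min
        prev = smoothed
        run = run + 1 if rate > rate_thresh else 0
        if run == persist:                              # sustained rise -> meal
            flags.append(i)
    return flags

# toy trace: flat, then a postprandial rise
trace = [100]*6 + [104, 110, 118, 128, 140, 150, 158, 163]
print(detect_meals(trace))
```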
Exercise, Insulin Absorption Rates, and Artificial Pancreas Control
NASA Astrophysics Data System (ADS)
Frank, Spencer; Hinshaw, Ling; Basu, Rita; Basu, Ananda; Szeri, Andrew J.
2016-11-01
Type 1 Diabetes is characterized by an inability of a person to endogenously produce the hormone insulin. Because of this, insulin must be injected - usually subcutaneously. The size of the injected dose and the rate at which the dose reaches the circulatory system have a profound effect on the ability to control glucose excursions, and therefore control of diabetes. However, insulin absorption rates via subcutaneous injection are variable and depend on a number of factors including tissue perfusion, physical activity (vasodilation, increased capillary throughput), and other tissue geometric and physical properties. Exercise may also have a sizeable effect on the rate of insulin absorption, which can potentially lead to dangerous glucose levels. Insulin-dosing algorithms, as implemented in an artificial pancreas controller, should account accurately for absorption rate variability and exercise effects on insulin absorption. The aforementioned factors affecting insulin absorption will be discussed within the context of both fluid mechanics and data driven modeling approaches.
Controlled electromigration protocol revised
NASA Astrophysics Data System (ADS)
Zharinov, Vyacheslav S.; Baumans, Xavier D. A.; Silhanek, Alejandro V.; Janssens, Ewald; Van de Vondel, Joris
2018-04-01
Electromigration has evolved from an important cause of failure in electronic devices to an appealing method, capable of modifying the material properties and geometry of nanodevices. Although this technique has been successfully used by researchers to investigate low dimensional systems and nanoscale objects, its low controllability remains a serious limitation. This is in part due to the inherent stochastic nature of the process, but also due to the inappropriate identification of the relevant control parameters. In this study, we identify a suitable process variable and propose a novel control algorithm that enhances the controllability and, at the same time, minimizes the intervention of an operator. As a consequence, the algorithm facilitates the application of electromigration to systems that require exceptional control of, for example, the width of a narrow junction. It is demonstrated that the electromigration rate can be stabilized on pre-set values, which eventually defines the final geometry of the electromigrated structures.
Villarejo, María Viqueira; Zapirain, Begoña García; Zorrilla, Amaia Méndez
2013-01-01
This paper presents the results of using a commercial pulsimeter as an electrocardiogram (ECG) for wireless detection of cardiac alterations and stress levels for home control. For these purposes, signal processing techniques (Continuous Wavelet Transform (CWT) and J48) have been used, respectively. The designed algorithm analyses the ECG signal and is able to detect the heart rate (99.42%), arrhythmia (93.48%) and extrasystoles (99.29%). The detection of stress level is complemented with Skin Conductance Response (SCR), whose success is 94.02%. The heart rate variability does not show added value to the stress detection in this case. With this pulsimeter, it is possible to prevent and detect anomalies for a non-intrusive way associated to a telemedicine system. It is also possible to use it during physical activity due to the fact the CWT minimizes the motion artifacts. PMID:23666135
Xiao, Feng; Kong, Lingjiang; Chen, Jian
2017-06-01
A rapid-search algorithm to improve the beam-steering efficiency of a liquid crystal optical phased array was proposed and experimentally demonstrated in this paper. The proposed algorithm, in which the value of the steering efficiency is taken as the objective function and the controlling voltage codes are considered as the optimization variables, consisted of a detection stage and a construction stage. It optimized the steering efficiency in the detection stage and adjusted its search direction adaptively in the construction stage to avoid getting caught in a wrong search space. Simulations were conducted to compare the proposed algorithm with the widely used pattern-search algorithm using criteria of convergence rate and optimized efficiency. Beam-steering optimization experiments were performed to verify the validity of the proposed method.
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
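One concrete way to realize negatively correlated Poisson draws, and hence paired tau-leap trajectories of the kind described above, is to feed antithetic uniforms U and 1 − U through the Poisson inverse CDF. The sketch below does this for a toy birth-death process with affine rates and compares the variance of the paired-mean estimator against independent sampling; it illustrates the antithetic idea only, not the authors' full algorithm or their example systems.

```python
import numpy as np
from scipy.stats import poisson

def tau_leap_pair(x0, steps, tau, birth=5.0, death=0.1, antithetic=True, rng=None):
    """Two tau-leaping paths of a birth-death chain driven by antithetic
    (or independent) uniforms; propensities are affine in the state and are
    evaluated at the start of each leap, as in standard tau-leaping."""
    rng = rng or np.random.default_rng()
    xa = xb = x0
    for _ in range(steps):
        reactions = ((birth, birth, +1), (death * xa, death * xb, -1))
        for rate_a, rate_b, change in reactions:
            u = min(max(rng.random(), 1e-12), 1 - 1e-12)
            ua, ub = u, (1.0 - u if antithetic else rng.random())
            na = int(poisson.ppf(ua, rate_a * tau)) if rate_a > 0 else 0
            nb = int(poisson.ppf(ub, rate_b * tau)) if rate_b > 0 else 0
            xa, xb = xa + change * na, xb + change * nb
        xa, xb = max(xa, 0), max(xb, 0)
    return 0.5 * (xa + xb)          # paired (averaged) estimator of E[X_T]

rng = np.random.default_rng(0)
for anti in (False, True):
    est = [tau_leap_pair(20, steps=30, tau=0.1, antithetic=anti, rng=rng)
           for _ in range(1000)]
    print("antithetic" if anti else "independent", "variance: %.3f" % np.var(est))
```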
NASA Technical Reports Server (NTRS)
Dyakonov, Artem A.; Buck, Gregory M.; Decaro, Anthony D.
2009-01-01
The analysis of effects of the reaction control system jet plumes on aftbody heating of Orion entry capsule is presented. The analysis covered hypersonic continuum part of the entry trajectory. Aerothermal environments at flight conditions were evaluated using Langley Aerothermal Upwind Relaxation Algorithm (LAURA) code and Data Parallel Line Relaxation (DPLR) algorithm code. Results show a marked augmentation of aftbody heating due to roll, yaw and aft pitch thrusters. No significant augmentation is expected due to forward pitch thrusters. Of the conditions surveyed the maximum heat rate on the aftshell is expected when firing a pair of roll thrusters at a maximum deceleration condition.
Linear triangular optimization technique and pricing scheme in residential energy management systems
NASA Astrophysics Data System (ADS)
Anees, Amir; Hussain, Iqtadar; AlKhaldi, Ali Hussain; Aslam, Muhammad
2018-06-01
This paper presents a new linear optimization algorithm for power scheduling of electric appliances. The proposed system is applied in a smart home community, in which the community controller acts as a virtual distribution company for the end consumers. We also present a pricing scheme between the community controller and its residential users based on real-time pricing and likely block rates. The results of the proposed optimization algorithm demonstrate that, by applying the anticipated technique, end users can not only minimise the consumption cost but also reduce the peak-to-average power ratio, which will be beneficial for the utilities as well.
Design and experiment of vehicular charger AC/DC system based on predictive control algorithm
NASA Astrophysics Data System (ADS)
He, Guangbi; Quan, Shuhai; Lu, Yuzhang
2018-06-01
For a vehicular charger whose front-end rectifier stage is uncontrolled, this paper proposes a predictive control algorithm for the DC/DC converter stage. A prediction model of the converter is established by the state-space averaging method, an optimal mathematical description is derived from it, and the prediction algorithm is analyzed through Simulink simulation. The charger structure is designed to meet the requirements of rated output power and adjustable output voltage: the first stage is a three-phase uncontrolled rectifier whose DC voltage Ud is smoothed by a filter capacitor, followed by a two-phase interleaved buck-boost circuit that provides the required wide-range output voltage; its working principle is analyzed and the parameters for component design and selection are determined. The analysis of current ripple shows that the interleaved parallel connection reduces both the output current ripple and the losses. A software simulation of the whole charging circuit meets the design requirements of the system. Finally, the software and hardware circuits are combined into an experimental charging platform, which demonstrates the feasibility and effectiveness of the proposed predictive control algorithm for the vehicular charger; the experimental results are consistent with the simulation.
Optimal trajectories for aeroassisted orbital transfer
NASA Technical Reports Server (NTRS)
Miele, A.; Venkataraman, P.
1983-01-01
Consideration is given to classical and minimax problems involved in aeroassisted transfer from high earth orbit (HEO) to low earth orbit (LEO). The transfer is restricted to coplanar operation, with trajectory control effected by means of lift modulation. The performance of the maneuver is indexed to the energy expenditure or, alternatively, the time integral of the heating rate. First-order optimality conditions are defined for the classical approach, as are a sequential gradient-restoration algorithm and a combined gradient-restoration algorithm. Minimization techniques are presented for the aeroassisted transfer energy consumption and the time integral of the heating rate, as well as minimization of the pressure. It is shown from the eigenvalues of the Jacobian matrix that the differential system is both stiff and unstable, implying that the sequential gradient-restoration algorithm in its present version is unsuitable. A new method, involving a multipoint approach to the two-point boundary value problem, is recommended.
A Rate-Based Congestion Control Algorithm for the SURAP 4 Packet Radio Architecture (SRNTN-72)
1990-01-01
factor, one packet, as at connection initialization. However, these TCP enhancements do not solve the fairness problem. The slow start algorithm ma... and signal interference (including jamming) and by the delays demanded by the link-layer protocols in the absence of contention for resources at the... values. This role would be redundant if bits-per-second rations used measurements of packet duration to determine how fast to decrease, at the expense of
NASA Astrophysics Data System (ADS)
Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik
2017-03-01
Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin and extracts the heart rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial-feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size, and averaging techniques applied to regions of interest, as well as the number of video frames used for data processing.
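A bare-bones version of the VPPG pipeline discussed above can be written in a few lines: average the green channel over a skin region in each frame, detrend the trace, and take the dominant FFT peak inside the physiological band as the heart rate. This is a generic illustration with a fixed region of interest and no face tracking, not either of the two specific algorithms evaluated in the paper.

```python
import numpy as np

def vppg_heart_rate(frames, fps, band=(0.7, 3.0)):
    """frames: array of shape (n_frames, H, W, 3), RGB video of a skin ROI.
    Returns the estimated heart rate in beats per minute."""
    green = frames[..., 1].mean(axis=(1, 2))          # spatial mean of green channel
    green = green - np.mean(green)                    # remove DC
    green = green * np.hanning(len(green))            # taper before FFT
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1]) # 42-180 bpm
    peak = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak

# synthetic check: a 72-bpm intensity modulation on random noise
fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t)            # 1.2 Hz = 72 bpm
frames = 128 + signal[:, None, None, None] + np.random.default_rng(1).normal(
    0, 1.0, size=(len(t), 8, 8, 3))
print(vppg_heart_rate(frames, fps))                   # should be close to 72
```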
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the data base.
Increasing BCI communication rates with dynamic stopping towards more practical use: an ALS study
NASA Astrophysics Data System (ADS)
Mainsah, B. O.; Collins, L. M.; Colwell, K. A.; Sellers, E. W.; Ryan, D. B.; Caves, K.; Throckmorton, C. S.
2015-02-01
Objective. The P300 speller is a brain-computer interface (BCI) that can possibly restore communication abilities to individuals with severe neuromuscular disabilities, such as amyotrophic lateral sclerosis (ALS), by exploiting elicited brain signals in electroencephalography (EEG) data. However, accurate spelling with BCIs is slow due to the need to average data over multiple trials to increase the signal-to-noise ratio (SNR) of the elicited brain signals. Probabilistic approaches to dynamically control data collection have shown improved performance in non-disabled populations; however, validation of these approaches in a target BCI user population has not occurred. Approach. We have developed a data-driven algorithm for the P300 speller based on Bayesian inference that improves spelling time by adaptively selecting the number of trials based on the acute SNR of a user’s EEG data. We further enhanced the algorithm by incorporating information about the user’s language. In this current study, we test and validate the algorithms online in a target BCI user population, by comparing the performance of the dynamic stopping (DS) (or early stopping) algorithms against the current state-of-the-art method, static data collection, where the amount of data collected is fixed prior to online operation. Main results. Results from online testing of the DS algorithms in participants with ALS demonstrate a significant increase in communication rate as measured in bits/min (100-300%), and theoretical bit rate (100-550%), while maintaining selection accuracy. Participants also overwhelmingly preferred the DS algorithms. Significance. We have developed a viable BCI algorithm that has been tested in a target BCI population which has the potential for translation to improve BCI speller performance towards more practical use for communication.
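The core of a Bayesian dynamic-stopping rule of the kind described above can be sketched generically: maintain a posterior over candidate characters, update it with the classifier score of each new flash, and stop collecting data as soon as the maximum posterior clears a confidence threshold. The update below assumes simple Gaussian score likelihoods and a uniform (language-model-free) prior; it illustrates the idea, not the published algorithm.

```python
import math

def dynamic_stopping(flash_stream, n_chars=36, threshold=0.95, max_flashes=120,
                     mu_t=1.0, mu_nt=0.0, sigma=1.0):
    """flash_stream: iterable of (flashed_char_indices, classifier_score) pairs.
    Returns (selected_char_index, flashes_used) once the posterior over
    characters exceeds `threshold`, or the best guess at max_flashes."""
    log_post = [0.0] * n_chars                     # uniform prior
    best, n = 0, 0
    for n, (flashed, score) in enumerate(flash_stream, start=1):
        for c in range(n_chars):
            mu = mu_t if c in flashed else mu_nt   # target vs non-target score model
            log_post[c] += -0.5 * ((score - mu) / sigma) ** 2
        m = max(log_post)
        probs = [math.exp(lp - m) for lp in log_post]
        z = sum(probs)
        best = max(range(n_chars), key=lambda c: probs[c])
        if probs[best] / z >= threshold or n >= max_flashes:
            return best, n
    return best, n
```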
Generalized Momentum Control of the Spin-Stabilized Magnetospheric Multiscale Formation
NASA Technical Reports Server (NTRS)
Queen, Steven Z.; Shah, Neerav; Benegalrao, Suyog S.; Blackman, Kathie
2015-01-01
The Magnetospheric Multiscale (MMS) mission consists of four identically instrumented, spin-stabilized observatories elliptically orbiting the Earth in a tetrahedron formation. The on-board attitude control system adjusts the angular momentum of the system using a generalized thruster-actuated control system that simultaneously manages precession, nutation and spin. Originally developed using Lyapunov control-theory with rate-feedback, a published algorithm has been augmented to provide a balanced attitude/rate response using a single weighting parameter. This approach overcomes an orientation sign-ambiguity in the existing formulation, and also allows for a smoothly tuned-response applicable to both a compact/agile spacecraft, as well as one with large articulating appendages.
Semi-autonomous unmanned ground vehicle control system
NASA Astrophysics Data System (ADS)
Anderson, Jonathan; Lee, Dah-Jye; Schoenberger, Robert; Wei, Zhaoyi; Archibald, James
2006-05-01
Unmanned Ground Vehicles (UGVs) have advantages over people in a number of different applications, ranging from sentry duty, scouting hazardous areas, convoying goods and supplies over long distances, and exploring caves and tunnels. Despite recent advances in electronics, vision, artificial intelligence, and control technologies, fully autonomous UGVs are still far from being a reality. Currently, most UGVs are fielded using tele-operation with a human in the control loop. Using tele-operations, a user controls the UGV from the relative safety and comfort of a control station and sends commands to the UGV remotely. It is difficult for the user to issue higher level commands such as patrol this corridor or move to this position while avoiding obstacles. As computer vision algorithms are implemented in hardware, the UGV can easily become partially autonomous. As Field Programmable Gate Arrays (FPGAs) become larger and more powerful, vision algorithms can run at frame rate. With the rapid development of CMOS imagers for consumer electronics, frame rate can reach as high as 200 frames per second with a small size of the region of interest. This increase in the speed of vision algorithm processing allows the UGVs to become more autonomous, as they are able to recognize and avoid obstacles in their path, track targets, or move to a recognized area. The user is able to focus on giving broad supervisory commands and goals to the UGVs, allowing the user to control multiple UGVs at once while still maintaining the convenience of working from a central base station. In this paper, we will describe a novel control system for the control of semi-autonomous UGVs. This control system combines a user interface similar to a simple tele-operation station along with a control package, including the FPGA and multiple cameras. The control package interfaces with the UGV and provides the necessary control to guide the UGV.
An analysis of value function learning with piecewise linear control
NASA Astrophysics Data System (ADS)
Tutsoy, Onder; Brown, Martin
2016-05-01
Reinforcement learning (RL) algorithms attempt to learn optimal control actions by iteratively estimating a long-term measure of system performance, the so-called value function. For example, RL algorithms have been applied to walking robots to examine the connection between robot motion and the brain, which is known as embodied cognition. In this paper, RL algorithms are analysed using an exemplar test problem. A closed form solution for the value function is calculated and this is represented in terms of a set of basis functions and parameters, which is used to investigate parameter convergence. The value function expression is shown to have a polynomial form where the polynomial terms depend on the plant's parameters and the value function's discount factor. It is shown that the temporal difference error introduces a null space for the differenced higher order basis associated with the effects of controller switching (saturated to linear control or terminating an experiment) apart from the time of the switch. This leads to slow convergence in the relevant subspace. It is also shown that badly conditioned learning problems can occur, and this is a function of the value function discount factor and the controller switching points. Finally, a comparison is performed between the residual gradient and TD(0) learning algorithms, and it is shown that the former has a faster rate of convergence for this test problem.
Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars
Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho
2015-01-01
In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
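The dynamic time-warping step at the heart of such gesture recognizers is straightforward to sketch: compute the DTW distance between an incoming (segmented) motion sequence and each stored template, and pick the nearest one. The implementation below is textbook DTW with placeholder templates, not the paper's specific recognizer or feature set.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between two sequences of
    feature vectors (e.g., orientation or angular-rate samples)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(gesture, templates):
    """templates: dict name -> reference sequence. Returns the closest match."""
    return min(templates, key=lambda name: dtw_distance(gesture, templates[name]))

# toy 1-D example standing in for 3-D hand trajectories
templates = {"shake": [[0], [1], [0], [-1], [0]], "touch": [[0], [1], [1], [1], [0]]}
print(recognize([[0], [0.9], [0.1], [-1.1], [0]], templates))   # -> "shake"
```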
NASA Astrophysics Data System (ADS)
Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu
2016-12-01
Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable-section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of the noise multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with a feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable-section cylindrical structure based on the proposed MSMH algorithm and the FEMILS scheme. It can be seen from the numerical studies that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations follow a procedure similar to real-life control and can be easily extended to a physical model platform.
NASA Astrophysics Data System (ADS)
Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min
2014-09-01
In mobile video systems powered by battery, reducing the encoder's compression energy consumption is critical to prolonging its lifetime. Previous energy-rate-distortion (E-R-D) optimization methods based on a software codec are not suitable for practical mobile camera systems because the energy consumption is too large and the encoding rate is too low. In this paper, we propose an E-R-D model for a hardware codec based on a gate-level simulation framework to measure the switching activity and the energy consumption. From the proposed E-R-D model, an energy-minimizing algorithm for mobile video camera sensors has been developed with the GOP (Group of Pictures) size and QP (Quantization Parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% energy consumption saving while satisfying the rate and distortion constraints.
Adaptive threshold control for auto-rate fallback algorithm in IEEE 802.11 multi-rate WLANs
NASA Astrophysics Data System (ADS)
Wu, Qilin; Lu, Yang; Zhu, Xiaolin; Ge, Fangzhen
2012-03-01
The IEEE 802.11 standard supports multiple rates for data transmission in the physical layer. Nowadays, to improve network performance, a rate adaptation scheme called auto-rate fallback (ARF) is widely adopted in practice. However, ARF scheme suffers performance degradation in multiple contending nodes environments. In this article, we propose a novel rate adaptation scheme called ARF with adaptive threshold control. In multiple contending nodes environment, the proposed scheme can effectively mitigate the frame collision effect on rate adaptation decision by adaptively adjusting rate-up and rate-down threshold according to the current collision level. Simulation results show that the proposed scheme can achieve significantly higher throughput than the other existing rate adaptation schemes. Furthermore, the simulation results also demonstrate that the proposed scheme can effectively respond to the varying channel condition.
Terrain mapping and control of unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Kang, Yeonsik
In this thesis, methods for terrain mapping and control of unmanned aerial vehicles (UAVs) are proposed. First, robust obstacle detection and tracking algorithms are introduced to eliminate clutter noise uncorrelated with the real obstacles. This is an important problem since most types of sensor measurements are vulnerable to noise. In order to eliminate such noise, a Kalman filter-based interacting multiple model (IMM) algorithm is employed to effectively detect obstacles and estimate their positions precisely. Using the outcome of the IMM-based obstacle detection algorithm, a new method of building a probabilistic occupancy grid map is proposed based on Bayes' rule. Since the proposed map update law uses the outputs of the IMM-based obstacle detection algorithm, simultaneous tracking of moving targets and mapping of stationary obstacles are possible. This is helpful especially in a noisy outdoor environment where different types of obstacles exist. Another feature of the algorithm is its capability to eliminate clutter noise as well as measurement noise. The proposed algorithm is simulated in Matlab using realistic sensor models. The results show close agreement with the layout of real obstacles. An efficient method called "quadtree" is used to process massive geographical information in a convenient manner. The algorithm is evaluated in a realistic simulation environment called RIPTIDE, which the NASA Ames Research Center developed to assess the performance of complicated software for UAVs. Supposing that a UAV is equipped with the abovementioned obstacle detection and mapping algorithm, the control problem of a small fixed-wing UAV is studied. A nonlinear model predictive controller (NMPC) is designed as a high-level controller for the fixed-wing UAV using a kinematic model of the UAV. The kinematic model is employed under the assumption that low-level controllers exist on the UAV. The UAV dynamics are nonlinear with input constraints, which is the main challenge explored in this thesis. The control objective of the NMPC is to track a desired line, and an analysis of the designed NMPC's stability follows to find the conditions that assure stability. The control objective is then extended to tracking adjoined multiple line segments with obstacle avoidance capability. In simulation, the performance of the NMPC is superb, with fast convergence and small overshoot. The computation time is not a burden for a fixed-wing UAV controller with a Pentium-level on-board computer that provides a reasonable control update rate.
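The Bayes-rule occupancy-grid update mentioned above is commonly implemented in log-odds form, which turns the probabilistic update into a simple addition per cell. The sketch below shows that generic update driven by per-cell hit/miss detections; the inverse sensor model constants are illustrative and the IMM tracking front end is omitted.

```python
import math

L_OCC, L_FREE = math.log(0.7 / 0.3), math.log(0.3 / 0.7)   # inverse sensor model
L_MIN, L_MAX = -4.0, 4.0                                   # clamp to avoid saturation

def update_cell(l_prev, detected):
    """Bayesian log-odds update for one grid cell given one detection result."""
    l_new = l_prev + (L_OCC if detected else L_FREE)
    return max(L_MIN, min(L_MAX, l_new))

def occupancy_probability(l):
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# a cell observed as occupied three times and free once
l = 0.0                      # prior log-odds (p = 0.5)
for hit in (True, True, False, True):
    l = update_cell(l, hit)
print(round(occupancy_probability(l), 3))
```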
NASA Astrophysics Data System (ADS)
Li, Liang; Jia, Gang; Chen, Jie; Zhu, Hongjun; Cao, Dongpu; Song, Jian
2015-08-01
Direct yaw moment control (DYC), which differentially brakes the wheels to produce a yaw moment for vehicle stability during steering, is an important part of the electronic stability control system. In this field, most control methods utilise active brake pressure with a feedback controller to adjust the braked wheel. However, this approach might lead to a control delay or overshoot because of the lack of a quantitative relationship between the target values passed from the upper stability controller to the lower pressure controller. Meanwhile, the stability controller usually ignores the implementing ability of the tyre forces, which might be restrained by the combined-slip dynamics of the tyre. Therefore, a novel DYC algorithm based on a hierarchical control strategy is put forward in this paper. For the upper controller, a correctional linear quadratic regulator, which contains not only feedback control but also feedforward control, is introduced to deduce the target stability yaw moment in order to guarantee yaw rate and side-slip angle stability. For the medium and lower controllers, the quantitative relationship between the vehicle stability objective and the target tyre forces of the controlled wheels is proposed to achieve smooth control performance based on a combined-slip tyre model. Simulations with a hardware-in-the-loop platform validate that the proposed algorithm can improve the stability of the vehicle effectively.
A PC-based magnetometer-only attitude and rate determination system for gyroless spacecraft
NASA Technical Reports Server (NTRS)
Challa, M.; Natanson, G.; Deutschmann, J.; Galal, K.
1995-01-01
This paper describes a prototype PC-based system that uses measurements from a three-axis magnetometer (TAM) to estimate the state (three-axis attitude and rates) of a spacecraft given no a priori information other than the mass properties. The system uses two algorithms that estimate the spacecraft's state - a deterministic magnetic-field only algorithm and a Kalman filter for gyroless spacecraft. The algorithms are combined by invoking the deterministic algorithm to generate the spacecraft state at epoch using a small batch of data and then using this deterministic epoch solution as the initial condition for the Kalman filter during the production run. System input comprises processed data that includes TAM and reference magnetic field data. Additional information, such as control system data and measurements from line-of-sight sensors, can be input to the system if available. Test results are presented using in-flight data from two three-axis stabilized spacecraft: Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) (gyroless, Sun-pointing) and Earth Radiation Budget Satellite (ERBS) (gyro-based, Earth-pointing). The results show that, using as little as 700 s of data, the system is capable of accuracies of 1.5 deg in attitude and 0.01 deg/s in rates; i.e., within SAMPEX mission requirements.
Using SPOT–5 HRG Data in Panchromatic Mode for Operational Detection of Small Ships in Tropical Area
Corbane, Christina; Marre, Fabrice; Petit, Michel
2008-01-01
Nowadays, there is a growing interest in applications of space remote sensing systems for maritime surveillance which includes among others traffic surveillance, maritime security, illegal fisheries survey, oil discharge and sea pollution monitoring. Within the framework of several French and European projects, an algorithm for automatic ship detection from SPOT–5 HRG data was developed to complement existing fishery control measures, in particular the Vessel Monitoring System. The algorithm focused on feature–based analysis of satellite imagery. Genetic algorithms and Neural Networks were used to deal with the feature–borne information. Based on the described approach, a first prototype was designed to classify small targets such as shrimp boats and tested on panchromatic SPOT–5, 5–m resolution product taking into account the environmental and fishing context. The ability to detect shrimp boats with satisfactory detection rates is an indicator of the robustness of the algorithm. Still, the benchmark revealed problems related to increased false alarm rates on particular types of images with a high percentage of cloud cover and a sea cluttered background. PMID:27879859
A Fault Recognition System for Gearboxes of Wind Turbines
NASA Astrophysics Data System (ADS)
Yang, Zhiling; Huang, Haiyue; Yin, Zidong
2017-12-01
Costs of maintenance and loss of power generation caused by faults in wind turbine gearboxes are the main components of operation costs for a wind farm. Therefore, the technology of condition monitoring and fault recognition for wind turbine gearboxes is becoming a hot topic. A condition monitoring and fault recognition system (CMFRS) is presented for CBM of wind turbine gearboxes in this paper. The vibration signals from acceleration sensors at different locations of the gearbox and the data from the supervisory control and data acquisition (SCADA) system are collected by the CMFRS. Then a feature extraction and optimization algorithm is applied to these operational data. Furthermore, to recognize gearbox faults, the GSO-LSSVR algorithm is proposed, combining the least squares support vector regression machine (LSSVR) with the Glowworm Swarm Optimization (GSO) algorithm. Finally, the results show that the fault recognition system used in this paper achieves a high rate for identifying three states of wind turbines' gears; besides, the combination of data features can affect the identification rate, and the selection optimization algorithm presented in this paper can obtain a good data feature subset for fault recognition.
Large Angle Reorientation of a Solar Sail Using Gimballed Mass Control
NASA Astrophysics Data System (ADS)
Sperber, E.; Fu, B.; Eke, F. O.
2016-06-01
This paper proposes a control strategy for the large angle reorientation of a solar sail equipped with a gimballed mass. The algorithm consists of a first stage that manipulates the gimbal angle in order to minimize the attitude error about a single principal axis. Once certain termination conditions are reached, a regulator is employed that selects a single gimbal angle to minimize the residual attitude error concomitantly with the body rate. Because the force due to the specular reflection of radiation is always directed along a reflector's surface normal, this form of thrust vector control cannot generate torques about an axis normal to the plane of the sail. Thus, in order to achieve three-axis control authority, a 1-2-1 or 2-1-2 sequence of rotations about principal axes is performed. The control algorithm is implemented directly in-line with the nonlinear equations of motion, and key performance characteristics are identified.
Investigation of energy management strategies for photovoltaic systems - An analysis technique
NASA Technical Reports Server (NTRS)
Cull, R. C.; Eltimsahy, A. H.
1982-01-01
Progress is reported in formulating energy management strategies for stand-alone PV systems, developing an analytical tool that can be used to investigate these strategies, applying this tool to determine the proper control algorithms and control variables (controller inputs and outputs) for a range of applications, and quantifying the relative performance and economics when compared to systems that do not apply energy management. The analysis technique developed may be broadly applied to a variety of systems to determine the most appropriate energy management strategies, control variables and algorithms. The only inputs required are statistical distributions for stochastic energy inputs and outputs of the system and the system's device characteristics (efficiency and ratings). Although the formulation was originally driven by stand-alone PV system needs, the techniques are also applicable to hybrid and grid connected systems.
NASA Astrophysics Data System (ADS)
Deng, Jie; Yao, Jun; Dewald, Julius P. A.
2005-12-01
In this paper, we attempt to determine a subject's intention of generating torque at the shoulder or elbow, two neighboring joints, using scalp electroencephalogram signals from 163 electrodes for a brain-computer interface (BCI) application. To achieve this goal, we have applied a time-frequency synthesized spatial patterns (TFSP) BCI algorithm with a presorting procedure. Using this method, we were able to achieve an average recognition rate of 89% in four healthy subjects, which is comparable to the highest rates reported in the literature but now for tasks with much closer spatial representations on the motor cortex. This result demonstrates, for the first time, that the TFSP BCI method can be applied to separate intentions between generating static shoulder versus elbow torque. Furthermore, in this study, the potential application of this BCI algorithm for brain-injured patients was tested in one chronic hemiparetic stroke subject. A recognition rate of 76% was obtained, suggesting that this BCI method can provide a potential control signal for neural prostheses or other movement coordination improving devices for patients following brain injury.
NASA Astrophysics Data System (ADS)
Cheng, X. Y.; Wang, H. B.; Jia, Y. L.; Dong, YH
2018-05-01
In this paper, an open-closed-loop iterative learning control (ILC) algorithm is constructed for a class of nonlinear systems subject to random data dropouts. The ILC algorithm is implemented over a networked control system (NCS), where only the off-line data are transmitted over the network while the real-time data are delivered point-to-point. Thus, there are two controllers rather than one in the control system, which makes better use of the stored and current information and thereby improves the performance achieved by open-loop control alone. During the transfer of off-line data between the nonlinear plant and the remote controller, data dropouts occur randomly and are modeled as a binary Bernoulli random variable. Both measurement and control data dropouts are taken into consideration simultaneously. The convergence criterion is derived based on rigorous analysis. Finally, the simulation results verify the effectiveness of the proposed method.
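As a rough illustration of the open-closed-loop idea under Bernoulli dropouts, the following Python sketch simulates a hypothetical scalar nonlinear plant with a two-term P-type ILC update; the plant, gains and dropout probability are placeholders, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(u, t):
    # Hypothetical scalar nonlinear plant used only for illustration.
    return 0.5 * u + 0.2 * np.sin(u) + 0.05 * t

T = 50                       # samples per trial
iters = 40                   # ILC iterations
p_drop = 0.2                 # probability that an off-line packet is lost
yd = np.sin(np.linspace(0.0, 2.0 * np.pi, T))   # desired trajectory

L_ol, L_cl = 0.4, 0.3        # open-loop and closed-loop learning gains
u = np.zeros(T)              # control input of the current iteration
u_mem = np.zeros(T)          # last successfully transmitted input profile

for k in range(iters):
    y = plant(u, np.arange(T))
    e = yd - y

    # Networked (off-line) channel: each sample of (u, e) survives with
    # probability 1 - p_drop; the dropout is a Bernoulli random variable.
    arrived = rng.random(T) > p_drop
    u_mem = np.where(arrived, u, u_mem)     # hold last good data on dropout
    e_mem = np.where(arrived, e, 0.0)

    # Open-loop term uses the transmitted (possibly stale) data; the
    # closed-loop term uses the locally available current error.
    u = u_mem + L_ol * e_mem + L_cl * e

y_final = plant(u, np.arange(T))
print("final tracking RMS error:", float(np.sqrt(np.mean((yd - y_final) ** 2))))
```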
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
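The report's two-stage semidefinite-programming formulation is not reproduced here; the sketch below instead illustrates the underlying matrix-vector allocation with actuator limits using a plain bounded least-squares solve (scipy.optimize.lsq_linear). The allocation matrix, command and bounds are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import lsq_linear

# B maps individual actuator efforts to the total body force/torque.
# Values are illustrative placeholders, not from the report.
B = np.array([
    [1.0, 0.0, 0.0, 0.5],
    [0.0, 1.0, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.7],
])
cmd = np.array([0.8, -0.3, 0.5])          # commanded total force/torque

lb = np.array([0.0, 0.0, 0.0, -1.0])      # thrusters cannot pull; wheel is bidirectional
ub = np.array([1.0, 1.0, 1.0,  1.0])

# Minimize || B u - cmd ||^2 subject to lb <= u <= ub.
sol = lsq_linear(B, cmd, bounds=(lb, ub))
print("actuator commands:", sol.x)
print("allocation error:", B @ sol.x - cmd)
```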
NASA Astrophysics Data System (ADS)
Wang, Fu; Liu, Bo; Zhang, Lijia; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun
2017-07-01
Elastic software-defined optical networks greatly improve the flexibility of the optical switching network while it has brought challenges to the routing and spectrum assignment (RSA). A multilayer virtual topology model is proposed to solve RSA problems. Two RSA algorithms based on the virtual topology are proposed, which are the ant colony optimization (ACO) algorithm of minimum consecutiveness loss and the ACO algorithm of maximum spectrum consecutiveness. Due to the computing power of the control layer in the software-defined network, the routing algorithm avoids the frequent link-state information between routers. Based on the effect of the spectrum consecutiveness loss on the pheromone in the ACO, the path and spectrum of the minimal impact on the network are selected for the service request. The proposed algorithms have been compared with other algorithms. The results show that the proposed algorithms can reduce the blocking rate by at least 5% and perform better in spectrum efficiency. Moreover, the proposed algorithms can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness.
Li, Ming; Miao, Chunyan; Leung, Cyril
2015-01-01
Coverage control is one of the most fundamental issues in directional sensor networks. In this paper, the coverage optimization problem in a directional sensor network is formulated as a multi-objective optimization problem. It takes into account the coverage rate of the network, the number of working sensor nodes and the connectivity of the network. The coverage problem considered in this paper is characterized by the geographical irregularity of the sensed events and heterogeneity of the sensor nodes in terms of sensing radius, field of angle and communication radius. To solve this multi-objective problem, we introduce a learning automata-based coral reef algorithm for adaptive parameter selection and use a novel Tchebycheff decomposition method to decompose the multi-objective problem into a single-objective problem. Simulation results show the consistent superiority of the proposed algorithm over alternative approaches. PMID:26690162
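A minimal sketch of the weighted Tchebycheff scalarization that such a decomposition relies on is given below; the objective values, weight vector and ideal point are placeholders, not values from the paper.

```python
import numpy as np

def tchebycheff(objectives, weights, z_star):
    """Weighted Tchebycheff scalarization:
    g(x) = max_i  w_i * |f_i(x) - z*_i|, to be minimized."""
    return float(np.max(weights * np.abs(objectives - z_star)))

# Illustrative objective vector for one candidate sensor configuration:
# [negative coverage rate, number of active nodes, negative connectivity].
f = np.array([-0.82, 14.0, -0.95])
z_star = np.array([-1.0, 0.0, -1.0])      # ideal (utopia) point
w = np.array([0.5, 0.2, 0.3])             # weight vector for this subproblem

print("scalarized cost:", tchebycheff(f, w, z_star))
```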
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
Time-critical multirate scheduling using contemporary real-time operating system services
NASA Technical Reports Server (NTRS)
Eckhardt, D. E., Jr.
1983-01-01
Although real-time operating systems provide many of the task control services necessary to process time-critical applications (i.e., applications with fixed, invariant deadlines), it may still be necessary to provide a scheduling algorithm at a level above the operating system in order to coordinate a set of synchronized, time-critical tasks executing at different cyclic rates. This paper examines the scheduling requirements for such applications and develops scheduling algorithms using services provided by contemporary real-time operating systems.
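A minimal sketch of one such coordination layer, assuming a simple table-driven cyclic executive built on generic OS timing services; the task set, periods and base rate are hypothetical.

```python
import time
from math import gcd
from functools import reduce

# Hypothetical periodic tasks: (name, period in base ticks, work function).
tasks = [
    ("attitude_control", 1, lambda: None),   # runs every minor frame
    ("navigation",       2, lambda: None),   # runs every 2nd frame
    ("telemetry",        8, lambda: None),   # runs every 8th frame
]

BASE_PERIOD_S = 0.010                         # minor frame = fastest rate (100 Hz)
major_frame = reduce(lambda a, b: a * b // gcd(a, b), (p for _, p, _ in tasks))

tick = 0
next_release = time.monotonic()
while tick < 5 * major_frame:                 # run a few major frames for illustration
    for name, period, work in tasks:
        if tick % period == 0:
            work()                            # deadline-critical work for this rate group
    tick += 1
    next_release += BASE_PERIOD_S
    time.sleep(max(0.0, next_release - time.monotonic()))
```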
NASA Astrophysics Data System (ADS)
Wang, Chun-yu; He, Lin; Li, Yan; Shuai, Chang-geng
2018-01-01
In engineering applications, ship machinery vibration may be induced by multiple rotational machines sharing a common vibration isolation platform and operating at the same time, and multiple sinusoidal components may be excited. These components may be located at frequencies with large differences or at very close frequencies. A multi-reference filtered-x Newton narrowband (MRFx-Newton) algorithm is proposed to control these multiple sinusoidal components in a MIMO (multiple input and multiple output) system, especially for those located at very close frequencies. The proposed MRFx-Newton algorithm can decouple and suppress multiple sinusoidal components located in the same narrow frequency band even though such components cannot be separated from each other by a narrowband-pass filter. Like the Fx-Newton algorithm, good real-time performance is also achieved by the faster convergence speed brought by the 2nd-order inverse secondary-path filter in the time domain. Experiments are also conducted to verify the feasibility and test the performance of the proposed algorithm, installed in an active-passive vibration isolation system, in suppressing the vibration excited by an artificial source and air compressors. The results show that the proposed algorithm not only has a convergence rate comparable to that of the Fx-Newton algorithm but also has better real-time performance and robustness than the Fx-Newton algorithm in active control of the vibration induced by multiple sound sources/rotational machines working on a shared platform.
A Digital Control Algorithm for Magnetic Suspension Systems
NASA Technical Reports Server (NTRS)
Britton, Thomas C.
1996-01-01
An ongoing program exists to investigate and develop magnetic suspension technologies and modelling techniques at NASA Langley Research Center. Presently, there is a laboratory-scale large air-gap suspension system capable of five degree-of-freedom (DOF) control that is operational and a six DOF system that is under development. Those systems levitate a cylindrical element containing a permanent magnet core above a planar array of electromagnets, which are used for levitation and control purposes. In order to evaluate various control approaches with those systems, the Generic Real-Time State-Space Controller (GRTSSC) software package was developed. That control software package allows the user to implement multiple control methods and allows for varied input/output commands. The development of the control algorithm is presented. The desired functionality of the software is discussed, including the ability to inject noise on sensor inputs and/or actuator outputs. Various limitations, common issues, and trade-offs are discussed including data format precision; the drawbacks of using either Direct Memory Access (DMA), interrupts, or program control techniques for data acquisition; and platform dependent concerns related to the portability of the software, such as memory addressing formats. Efforts to minimize overall controller loop-rate and a comparison of achievable controller sample rates are discussed. The implementation of a modular code structure is presented. The format for the controller input data file and the noise information file is presented. Controller input vector information is available for post-processing by mathematical analysis software such as MATLAB.
Debbarma, Sanjoy; Saikia, Lalit Chandra; Sinha, Nidul
2014-03-01
The present work focuses on automatic generation control (AGC) of three unequal-area thermal systems considering reheat turbines and appropriate generation rate constraints (GRC). A fractional order (FO) controller named the I(λ)D(µ) controller, based on the CRONE approximation, is proposed for the first time as an appropriate technique to solve the multi-area AGC problem in power systems. A recently developed metaheuristic algorithm known as the firefly algorithm (FA) is used for the simultaneous optimization of the gains and other parameters such as the order of the integrator (λ) and differentiator (μ) of the I(λ)D(µ) controller and the governor speed regulation parameters (R). The dynamic responses corresponding to the optimized I(λ)D(µ) controller gains, λ, μ, and R are compared with those of classical integer order (IO) controllers such as I, PI and PID controllers. Simulation results show that the proposed I(λ)D(µ) controller provides more improved dynamic responses and outperforms the IO-based classical controllers. Further, sensitivity analysis confirms the robustness of the optimized I(λ)D(µ) controller to wide changes in system loading conditions and in the size and position of the step load perturbation (SLP). The proposed controller is also found to perform well compared to IO-based controllers when the SLP takes place simultaneously in any two areas or in all the areas. Robustness of the proposed I(λ)D(µ) controller is also tested against system parameter variations.
NASA Astrophysics Data System (ADS)
Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko
We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners and does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition is important, and there is a limitation on the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image quality problems we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features, trained by Real AdaBoost, with their prior probability distributions. It allows only reliable features among the trained ones to be evaluated and utilized during each authentication, and achieves high recognition performance rates. A field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a constantly high recognition performance rate independent of face image quality, that is, about four times lower EER (Equal Error Rate) under a variety of image conditions than a system without any prior probability distributions. On the other hand, using image-difference features without any prior probabilities is sensitive to image quality. We also evaluated PCA, which has worse but constant performance rates because of its general optimization over all data. Compared with PCA, Real AdaBoost without any prior distribution performs twice as well under good image conditions, but degrades to a performance comparable to PCA under poor image conditions.
Adaptive Inverse Control for Rotorcraft Vibration Reduction
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1985-01-01
This thesis extends the Least Mean Square (LMS) algorithm to solve the multiple-input, multiple-output problem of alleviating N/Rev (revolutions per minute by number of blades) helicopter fuselage vibration by means of adaptive inverse control. A frequency domain locally linear model is used to represent the transfer matrix relating the higher harmonic pitch control inputs to the harmonic vibration outputs to be controlled. By using the inverse matrix as the controller gain matrix, an adaptive inverse regulator is formed to alleviate the N/Rev vibration. The stability and rate of convergence properties of the extended LMS algorithm are discussed. It is shown that the stability ranges for the elements of the stability gain matrix are directly related to the eigenvalues of the vibration signal information matrix for the learning phase, but not for the control phase. The overall conclusion is that the LMS adaptive inverse control method can form a robust vibration control system, but will require some tuning of the input sensor gains, the stability gain matrix, and the amount of control relaxation to be used. The learning curve of the controller during the learning phase is shown to be quantitatively close to that predicted by averaging the learning curves of the normal modes. For higher order transfer matrices, a rough estimate of the inverse is needed to start the algorithm efficiently. The simulation results indicate that the factor which most influences LMS adaptive inverse control is the product of the control relaxation and the stability gain matrix. A small stability gain matrix makes the controller less sensitive to relaxation selection, and permits faster and more stable vibration reduction, than choosing the stability gain matrix large and the control relaxation term small. It is shown that the best selection of the stability gain matrix elements and the amount of control relaxation is basically a compromise between slow, stable convergence and fast convergence with an increased possibility of unstable identification. In the simulation studies, the LMS adaptive inverse control algorithm is shown to be capable of adapting the inverse (controller) matrix to track changes in the flight conditions. The algorithm converges quickly for moderate disturbances, while taking longer for larger disturbances. Perfect knowledge of the inverse matrix is not required for good control of the N/Rev vibration. However, it is shown that measurement noise will prevent the LMS adaptive inverse control technique from controlling the vibration, unless the signal averaging method presented is incorporated into the algorithm.
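A compact sketch of the basic mechanism, assuming an illustrative 3x3 transfer matrix: an LMS outer-product update identifies the transfer matrix during a probing (learning) phase, and its estimated inverse is then used as the regulator gain with a relaxation factor. The numbers are placeholders, not the thesis' helicopter model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                                    # number of harmonic channels

T_true = np.array([[2.0, 0.3, -0.2],
                   [0.1, 1.8,  0.4],
                   [-0.3, 0.2, 2.2]])    # unknown transfer matrix (illustrative)
z0 = np.array([1.0, -0.5, 0.8])          # uncontrolled N/Rev vibration amplitudes

# --- Learning phase: identify T with an LMS (outer-product) update ---
T_hat = np.zeros((n, n))
mu = 0.02
for _ in range(2000):
    theta = rng.normal(size=n)           # random probe of the control inputs
    dz = T_true @ theta                  # change in vibration due to the probe
    T_hat += mu * np.outer(dz - T_hat @ theta, theta)

# --- Control phase: adaptive inverse regulator with relaxation ---
alpha = 0.6                              # control relaxation factor
theta = np.zeros(n)
for _ in range(20):
    z = z0 + T_true @ theta              # measured harmonic vibration
    theta = theta - alpha * np.linalg.solve(T_hat, z)

print("identification error:", np.linalg.norm(T_hat - T_true))
print("residual vibration:  ", np.linalg.norm(z0 + T_true @ theta))
```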
Dinucleotide controlled null models for comparative RNA gene prediction.
Gesell, Tanja; Washietl, Stefan
2008-05-27
Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered. SISSIz is available as open source C code that can be compiled for every major platform and downloaded here: http://sourceforge.net/projects/sissiz.
A rate-constrained fast full-search algorithm based on block sum pyramid.
Song, Byung Cheol; Chun, Kang-Wook; Ra, Jong Beom
2005-03-01
This paper presents a fast full-search algorithm (FSA) for rate-constrained motion estimation. The proposed algorithm, which is based on the block sum pyramid frame structure, successively eliminates unnecessary search positions according to a rate-constrained criterion. The algorithm provides estimation performance identical to that of a conventional FSA with a rate constraint, while achieving a considerable reduction in computation.
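The pyramid itself is not reproduced here; the sketch below shows a single-level version of the idea, in which the block-sum lower bound on the SAD plus a motion-vector rate term is used to reject candidates under a rate-constrained cost J = SAD + lambda * R(mv). The rate model and lambda are illustrative stand-ins.

```python
import numpy as np

def sad(a, b):
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def mv_rate(mv, pred=(0, 0)):
    # Crude stand-in for the motion-vector bit cost (not the codec's real table).
    return abs(mv[0] - pred[0]) + abs(mv[1] - pred[1]) + 1

def rc_fast_full_search(cur, ref, top, left, search=8, lam=4.0):
    """Rate-constrained full search with block-sum elimination (single level)."""
    N = cur.shape[0]
    cur_sum = int(cur.sum())
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + N > ref.shape[0] or x + N > ref.shape[1]:
                continue
            cand = ref[y:y + N, x:x + N]
            rate_term = lam * mv_rate((dx, dy))
            # Block-sum lower bound: SAD >= |sum(cur) - sum(cand)|, so the
            # candidate can be rejected without computing the full SAD.
            if abs(cur_sum - int(cand.sum())) + rate_term >= best_cost:
                continue
            cost = sad(cur, cand) + rate_term
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = ref[20:36, 24:40].copy()                 # 16x16 block taken from the reference
print(rc_fast_full_search(cur, ref, top=20, left=24))
```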
Control algorithms and applications of the wavefront sensorless adaptive optics
NASA Astrophysics Data System (ADS)
Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen
2017-10-01
Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for the WFSless AO system are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms commonly treats the performance metric as a function of the control parameters and then uses a certain optimization algorithm to improve the performance metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. Following a brief description of the above typical control algorithms, hybrid methods combining a model-free control algorithm with a model-based control algorithm are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and extended objects.
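As one representative model-free method of this kind, a stochastic parallel gradient descent (SPGD) sketch is given below; the "metric" is a toy stand-in for a measured image-quality metric, and the actuator count, gain and perturbation amplitude are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_act = 16                               # number of deformable-mirror actuators

def metric(u):
    # Toy stand-in for a measured image-quality metric (higher is better):
    # it peaks when the control vector cancels a fixed aberration 'a'.
    a = np.linspace(-1.0, 1.0, n_act)
    return 1.0 / (1.0 + float(np.sum((u - a) ** 2)))

u = np.zeros(n_act)                      # actuator commands
gain, delta = 2.0, 0.1                   # SPGD gain and perturbation amplitude

for _ in range(2000):
    du = delta * rng.choice([-1.0, 1.0], size=n_act)   # random bipolar perturbation
    dJ = metric(u + du) - metric(u - du)               # two-sided metric difference
    u += gain * dJ * du                                # stochastic parallel gradient step

print("final metric:", round(metric(u), 3))            # approaches 1.0 as the aberration is corrected
```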
Multigrid one shot methods for optimal control problems: Infinite dimensional control
NASA Technical Reports Server (NTRS)
Arian, Eyal; Taasan, Shlomo
1994-01-01
The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two level asymptotic convergence rate, to determine the amplitude of the minimization steps, and the choice of a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the same cost as solving the corresponding analysis problems just a few times.
Steady-State Algorithmic Analysis of M/M/c Two-Priority Queues with Heterogeneous Rates.
1981-04-21
ALGORITHMIC ANALYSIS OF M/M/c TWO-PRIORITY QUEUES WITH HETEROGENEOUS RATES, by Douglas R. Miller. An algorithm for steady-state analysis of M/M/c nonpreemptive ... practical algorithm for systems involving more than two priority classes. The preemptive case is simpler than the nonpreemptive case; an algorithm for it ... priority nonpreemptive queueing system with arrival rates λ1 and λ2 and service rates μ1 and μ2. The state space can be described as follows. Let x(i,j,k) be ...
Orientation estimation algorithm applied to high-spin projectiles
NASA Astrophysics Data System (ADS)
Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.
2014-06-01
High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectile control system. However, orientation estimators have not been well translated from flight vehicles since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific to these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback is used to estimate the roll angular rate of the projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.
Active model-based balancing strategy for self-reconfigurable batteries
NASA Astrophysics Data System (ADS)
Bouchhima, Nejmeddine; Schnierle, Marc; Schulte, Sascha; Birke, Kai Peter
2016-08-01
This paper describes a novel balancing strategy for self-reconfigurable batteries in which the discharge and charge rates of each cell can be controlled. While much effort has been focused on improving the hardware architecture of self-reconfigurable batteries, energy equalization algorithms have not been systematically optimized in terms of maximizing the efficiency of the balancing system. Our approach incorporates aspects of such optimization theory. We develop a balancing strategy for optimal control of the discharge rate of battery cells. We first formulate the cell balancing as a nonlinear optimal control problem, which is then modeled as a network program. Using dynamic programming techniques and MATLAB's vectorization feature, we solve the optimal control problem by generating the optimal battery operation policy for a given drive cycle. The simulation results show that the proposed strategy efficiently balances the cells over the life of the battery, an obvious advantage that is absent in other conventional approaches. Our algorithm is shown to be robust when tested against different influencing parameters varying over a wide spectrum on different drive cycles. Furthermore, due to its low computation time and proven low sensitivity to inaccurate power predictions, our strategy can be integrated in a real-time system.
NASA Astrophysics Data System (ADS)
Lee, Kangwon
Intelligent vehicle systems, such as Adaptive Cruise Control (ACC) or Collision Warning/Collision Avoidance (CW/CA), are currently under development, and several companies have already offered ACC on selected models. Control or decision-making algorithms of these systems are commonly evaluated under extensive computer simulations and well-defined scenarios on test tracks. However, they have rarely been validated with large quantities of naturalistic human driving data. This dissertation utilized two University of Michigan Transportation Research Institute databases (Intelligent Cruise Control Field Operational Test and System for Assessment of Vehicle Motion Environment) in the development and evaluation of longitudinal driver models and CW/CA algorithms. First, to examine how drivers normally follow other vehicles, the vehicle motion data from the databases were processed using a Kalman smoother. The processed data was then used to fit and evaluate existing longitudinal driver models (e.g., the linear follow-the-leader model, the Newell's special model, the nonlinear follow-the-leader model, the linear optimal control model, the Gipps model and the optimal velocity model). A modified version of the Gipps model was proposed and found to be accurate in both microscopic (vehicle) and macroscopic (traffic) senses. Second, to examine emergency braking behavior and to evaluate CW/CA algorithms, the concepts of signal detection theory and a performance index suitable for unbalanced situations (few threatening data points vs. many safe data points) are introduced. Selected existing CW/CA algorithms were found to have a performance index (geometric mean of true-positive rate and precision) not exceeding 20%. To optimize the parameters of the CW/CA algorithms, a new numerical optimization scheme was developed to replace the original data points with their representative statistics. A new CW/CA algorithm was proposed, which was found to score higher than 55% in the performance index. This dissertation provides a model of how drivers follow lead-vehicles that is much more accurate than other models in the literature. Furthermore, the data-based approach was used to confirm that a CW/CA algorithm utilizing lead-vehicle braking was substantially more effective than existing algorithms, leading to collision warning systems that are much more likely to contribute to driver safety.
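A small helper for the performance index described above (geometric mean of true-positive rate and precision), with illustrative counts that are not taken from the dissertation:

```python
from math import sqrt

def performance_index(tp, fp, fn):
    """Geometric mean of true-positive rate (recall) and precision,
    suited to unbalanced warning data (few threats vs. many safe samples)."""
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return sqrt(recall * precision)

# Illustrative counts for a hypothetical CW/CA algorithm evaluation.
print(performance_index(tp=45, fp=120, fn=15))   # about 0.45
```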
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.; Schumann, Johann; Guenther, Kurt; Bosworth, John
2006-01-01
Adaptive control technologies that incorporate learning algorithms have been proposed to enable autonomous flight control and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments [1-2]. At the present time, however, it is unknown how adaptive algorithms can be routinely verified, validated, and certified for use in safety-critical applications. Rigorous methods for adaptive software verification and validation must be developed to ensure that the control software functions as required and is highly safe and reliable. A large gap appears to exist between the point at which control system designers feel the verification process is complete, and when FAA certification officials agree it is complete. Certification of adaptive flight control software verification is complicated by the use of learning algorithms (e.g., neural networks) and degrees of system non-determinism. Of course, analytical efforts must be made in the verification process to place guarantees on learning algorithm stability, rate of convergence, and convergence accuracy. However, to satisfy FAA certification requirements, it must be demonstrated that the adaptive flight control system is also able to fail and still allow the aircraft to be flown safely or to land, while at the same time providing a means of crew notification of the (impending) failure. It was for this purpose that the NASA Ames Confidence Tool was developed [3]. This paper presents the Confidence Tool as a means of providing in-flight software assurance monitoring of an adaptive flight control system. The paper will present the data obtained from flight testing the tool on a specially modified F-15 aircraft designed to simulate loss of flight control surfaces.
Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H
2017-09-01
Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches. This last is an approach recently developed in our laboratory. The proposed TifMA algorithm consistently provided higher detection rates than the other three methods, with accuracies greater than 95% for all data. Moreover, our algorithm was able to pinpoint the start and end times of the MNA with an error of less than 1 s in duration, whereas the next-best algorithm had a detection error of more than 2.2 s. The final, most challenging, dataset was collected to verify the performance of the algorithm in discriminating between corrupted data that were usable for accurate HR estimations and data that were nonusable. It was found that on average 48% of the data segments were found to have MNA, and of these, 38% could be used to provide reliable HR estimation.
Cassani, Raymundo; Falk, Tiago H.; Fraga, Francisco J.; Kanda, Paulo A. M.; Anghinah, Renato
2014-01-01
Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system “semi-automated.” Notwithstanding, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment. PMID:24723886
Cicone, Antonio; Wu, Hau-Tieng
2017-01-01
Despite the popularity of noninvasive, economical, comfortable, and easy-to-install photoplethysmography (PPG), there is still a lack of a mathematically rigorous and stable algorithm that can simultaneously extract from a single-channel PPG signal the instantaneous heart rate (IHR) and the instantaneous respiratory rate (IRR). In this paper, a novel algorithm called deppG is provided to tackle this challenge. deppG is composed of two theoretically solid nonlinear-type time-frequency analysis techniques, the de-shape short-time Fourier transform and the synchrosqueezing transform, which allow us to extract the instantaneous physiological information from the PPG signal in a reliable way. To test its performance, in addition to validating the algorithm on a simulated signal and discussing the meaning of "instantaneous," the algorithm is applied to two publicly available batch databases, the Capnobase and the ICASSP 2015 signal processing cup. The former contains PPG signals from spontaneous or controlled breathing in static patients, and the latter is made up of PPG signals collected from subjects doing intense physical activities. The accuracies of the estimated IHR and IRR are compared with those obtained by other methods, and represent the state of the art in this field of research. The results suggest the potential of deppG to extract instantaneous physiological information from a signal acquired from widely available wearable devices, even when a subject carries out intense physical activities. PMID:29018352
Modification of Learning Rate with LVQ Model Improvement in Learning Backpropagation
NASA Astrophysics Data System (ADS)
Tata Hardinata, Jaya; Zarlis, Muhammad; Budhiarti Nababan, Erna; Hartama, Dedy; Sembiring, Rahmat W.
2017-12-01
One type of artificial neural network is backpropagation. This algorithm is trained on a given network architecture and should provide correct outputs for inputs that are similar to, but not identical with, those used during training. The selection of appropriate parameters also affects the outcome; the learning rate is one of the parameters that influence the training process, since it affects the speed of learning on the network architecture. If the learning rate is set too large, the algorithm becomes unstable; otherwise, the algorithm converges only after a very long period of time. This study was therefore made to determine the value of the learning rate for the backpropagation algorithm. The LVQ learning-rate model is one of the models used to determine the learning-rate value of the LVQ algorithm; here this LVQ model is modified and applied to the backpropagation algorithm. The experimental results show that with the modified LVQ learning-rate model applied to the backpropagation algorithm, the learning process becomes faster (fewer epochs).
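A hedged sketch of the general idea: a simple LVQ-style decaying learning-rate schedule inside a plain backpropagation loop on a toy XOR task. The schedule, network size and constants shown here are only illustrative and may differ from the paper's exact modification.

```python
import numpy as np

rng = np.random.default_rng(0)

def lvq_style_lr(alpha0, epoch, max_epochs):
    # LVQ-style linearly decaying schedule; the paper's exact modification
    # may differ, this only illustrates an adaptive learning rate.
    return alpha0 * (1.0 - epoch / max_epochs)

# Tiny 2-8-1 network trained on XOR with plain batch backpropagation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))
b1, b2 = np.zeros(8), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

max_epochs = 8000
for epoch in range(max_epochs):
    lr = lvq_style_lr(1.0, epoch, max_epochs)
    h = sig(X @ W1 + b1)                    # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # backward pass (MSE loss)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print("XOR outputs (target 0, 1, 1, 0):", out.ravel().round(2))
```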
NASA Technical Reports Server (NTRS)
Hanson, Curt; Miller, Chris; Wall, John H.; Vanzwieten, Tannen S.; Gilligan, Eric; Orr, Jeb S.
2015-01-01
An adaptive augmenting control algorithm for the Space Launch System has been developed at the Marshall Space Flight Center as part of the launch vehicle's baseline flight control system. A prototype version of the SLS flight control software was hosted on a piloted aircraft at the Armstrong Flight Research Center to demonstrate the adaptive controller on a full-scale realistic application in a relevant flight environment. Concerns regarding adverse interactions between the adaptive controller and a proposed manual steering mode were investigated by giving the pilot trajectory deviation cues and pitch rate command authority. Two NASA research pilots flew a total of twenty-five constant pitch-rate trajectories using a prototype manual steering mode with and without adaptive control.
Memory-Scalable GPU Spatial Hierarchy Construction.
Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D
2011-04-01
Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
[Research on Control System of an Exoskeleton Upper-limb Rehabilitation Robot].
Wang, Lulu; Hu, Xin; Hu, Jie; Fang, Youfang; He, Rongrong; Yu, Hongliu
2016-12-01
In order to help patients with upper-limb dysfunction undergo rehabilitation training, this paper proposes an upper-limb exoskeleton rehabilitation robot with four degrees of freedom (DOF) and realizes two control schemes, i.e., voice control and electromyography control. The hardware and software design of the voice control system was completed based on RSC-4128 chips, which realized speaker-specific speech recognition. In addition, this study used self-made surface electromyogram (sEMG) signal extraction electrodes to collect sEMG signals and realized pattern recognition by processing the sEMG signals, extracting time-domain features and applying a fixed-threshold algorithm. A pulse-width modulation (PWM) algorithm was used to adjust the speed of the system. Voice control and electromyography control experiments were then carried out, and the results showed that the mean recognition rates of voice control and electromyography control reached 93.1% and 90.9%, respectively. The results prove the feasibility of the control system. This study is expected to lay a theoretical foundation for further improvement of the control system of the upper-limb rehabilitation robot.
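A minimal sketch of the fixed-threshold, time-domain-feature idea (mean absolute value over sliding windows); the sampling rate, window length and threshold are placeholders, not the paper's values.

```python
import numpy as np

def mean_absolute_value(window):
    # Classic time-domain sEMG feature.
    return float(np.mean(np.abs(window)))

def detect_intent(semg, fs=1000, win_ms=100, threshold=0.15):
    """Fixed-threshold detection on sliding windows of sEMG.
    Threshold and window length are illustrative assumptions."""
    win = int(fs * win_ms / 1000)
    decisions = []
    for start in range(0, len(semg) - win + 1, win):
        mav = mean_absolute_value(semg[start:start + win])
        decisions.append(1 if mav > threshold else 0)   # 1 = motion intention
    return decisions

# Synthetic signal: rest, contraction burst, rest.
rng = np.random.default_rng(0)
rest = 0.02 * rng.normal(size=1000)
burst = 0.4 * rng.normal(size=1000)
signal = np.concatenate([rest, burst, rest])
print(detect_intent(signal))   # expected pattern: zeros, then ones, then zeros
```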
Algorithm for automatic analysis of electro-oculographic data.
Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti
2013-10-25
Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement.
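A simplified sketch of threshold-based saccade detection with an automatically estimated threshold is shown below; the published algorithm's feature-based calibration is more elaborate, and the velocity criterion, robust noise estimate and constants here are illustrative assumptions.

```python
import numpy as np

def detect_saccades(eog, fs, k=6.0):
    """Velocity-threshold saccade detection on a horizontal EOG channel.
    The threshold is auto-estimated as k times a robust noise estimate
    (median absolute deviation); k and the scheme are illustrative only."""
    vel = np.gradient(eog) * fs                      # deg/s if eog is in degrees
    noise = 1.4826 * np.median(np.abs(vel - np.median(vel)))
    thr = k * noise
    above = np.abs(vel) > thr
    # Collapse consecutive supra-threshold samples into (onset, offset) pairs.
    edges = np.flatnonzero(np.diff(above.astype(int)))
    onsets, offsets = edges[::2] + 1, edges[1::2] + 1
    return list(zip(onsets, offsets)), thr

# Synthetic EOG: fixation, a 10-degree saccade over 40 ms, fixation, plus noise.
fs = 500
rng = np.random.default_rng(0)
eog = np.concatenate([np.zeros(500), np.linspace(0, 10, 20), 10 * np.ones(500)])
eog += 0.05 * rng.normal(size=eog.size)
events, thr = detect_saccades(eog, fs)
print("threshold (deg/s):", round(thr, 1), "saccade samples:", events)
```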
Respiratory rate estimation during triage of children in hospitals.
Shah, Syed Ahmar; Fleming, Susannah; Thompson, Matthew; Tarassenko, Lionel
2015-01-01
Accurate assessment of a child's health is critical for appropriate allocation of medical resources and timely delivery of healthcare in Emergency Departments. The accurate measurement of vital signs is a key step in the determination of the severity of illness, and respiratory rate is currently the most difficult vital sign to measure accurately. Several previous studies have attempted to extract respiratory rate from photoplethysmogram (PPG) recordings. However, the majority have been conducted in controlled settings using PPG recordings from healthy subjects. In many studies, manual selection of clean sections of PPG recordings was undertaken before assessing the accuracy of the signal processing algorithms developed. Such selection procedures are not appropriate in clinical settings. A major limitation of AR modelling, previously applied to respiratory rate estimation, is the appropriate selection of the model order. This study developed a novel algorithm that automatically estimates respiratory rate from a median spectrum constructed by applying multiple AR models to processed PPG segments acquired with pulse oximetry using a finger probe. Good-quality sections were identified using a dynamic template-matching technique to assess PPG signal quality. The algorithm was validated on 205 children presenting to the Emergency Department at the John Radcliffe Hospital, Oxford, UK, with reference respiratory rates up to 50 breaths per minute estimated by paediatric nurses. At the time of writing, the authors are not aware of any other study that has validated respiratory rate estimation using data collected from over 200 children in hospitals during routine triage.
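A hedged sketch of the median-spectrum idea: Yule-Walker AR spectra are computed for several model orders and combined by a pointwise median before peak-picking. The preprocessing, sampling rate and order range are assumptions, not the study's exact settings.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_psd(x, order, nfft=1024, fs=4.0):
    """Yule-Walker AR spectrum of a (detrended) respiratory-modulation signal."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)
    a = solve_toeplitz(r[:order], r[1:order + 1])         # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:order + 1])              # driving-noise variance
    freqs = np.linspace(0, fs / 2, nfft)
    z = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(1, order + 1)))
    psd = sigma2 / np.abs(1 - z @ a) ** 2
    return freqs, psd

# Synthetic respiratory-induced modulation extracted from PPG: 0.4 Hz (24 breaths/min).
fs = 4.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.4 * t) + 0.3 * rng.normal(size=t.size)

# Median spectrum over several AR model orders avoids committing to one order.
spectra = []
for p in range(4, 13):
    freqs, psd = ar_psd(x, p, fs=fs)
    spectra.append(psd / psd.max())                        # normalize each spectrum
median_psd = np.median(np.vstack(spectra), axis=0)
rr_hz = freqs[np.argmax(median_psd)]
print("estimated respiratory rate: %.1f breaths/min" % (60 * rr_hz))
```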
A Real Time Controller For Applications In Smart Structures
NASA Astrophysics Data System (ADS)
Ahrens, Christian P.; Claus, Richard O.
1990-02-01
Research in smart structures, especially in the area of vibration suppression, has warranted the investigation of advanced computing environments. The real-time computing power of PCs has limited the development of high-order control algorithms. This paper presents a simple Real Time Embedded Control System (RTECS) in an application of Intelligent Structure Monitoring by way of modal domain sensing for vibration control. It is compared to a PC AT based system for overall functionality and speed. The system employs a novel Reduced Instruction Set Computer (RISC) microcontroller capable of 15 million instructions per second (MIPS) continuous performance and burst rates of 40 MIPS. Advanced Complementary Metal Oxide Semiconductor (CMOS) circuits are integrated on a single 100 mm by 160 mm printed circuit board requiring only 1 Watt of power. An operating system written in Forth provides high speed operation and short development cycles. The system allows for implementation of Input/Output (I/O) intensive algorithms and provides capability for advanced system development.
Adaptive control and noise suppression by a variable-gain gradient algorithm
NASA Technical Reports Server (NTRS)
Merhav, S. J.; Mehta, R. S.
1987-01-01
An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: The stability of the closed loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation rate can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.
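A minimal sketch of a normalized LMS identification loop with a variable gain tied to a running error-power estimate; the FIR plant, gain schedule and constants are illustrative, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown FIR response to be matched by the adaptive (nonparametric) filter.
h_true = np.array([0.4, -0.25, 0.15, 0.05])
M = len(h_true)

N = 4000
x = rng.normal(size=N)                      # excitation
d = np.convolve(x, h_true)[:N]              # desired (reference model) output
d += 0.01 * rng.normal(size=N)              # small measurement noise

w = np.zeros(M)                             # adaptive FIR weights
eps = 1e-6
mu_min, mu_max = 0.05, 1.0
err_pow = 1.0                               # running estimate of error power

for n in range(M, N):
    u = x[n - M + 1:n + 1][::-1]            # regressor (most recent sample first)
    e = d[n] - w @ u
    err_pow = 0.99 * err_pow + 0.01 * e * e
    # Variable gain: large while the error power is high, small once settled.
    mu = np.clip(mu_max * err_pow / (err_pow + 0.01), mu_min, mu_max)
    w += mu * e * u / (eps + u @ u)         # normalized LMS update

print("identified impulse response:", np.round(w, 3))
```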
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
The intra prediction process of the H.264 video coding standard is used to code the first frame, i.e., the intra frame of a video, and achieves better coding efficiency than previous video coding standards. A further benefit of intra frame coding is that it reduces spatial pixel redundancy within the current frame, reduces computational complexity and provides better rate-distortion performance. Intra frames are conventionally coded with the rate distortion optimization (RDO) method. This method increases computational complexity and bit rate and reduces picture quality, so it is difficult to implement in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous work on intra frame coding in the H.264 standard using fast mode decision intra prediction algorithms based on different techniques resulted in increased bit rate and degraded picture quality (PSNR) for different quantization parameters. Many previous fast mode decision approaches to intra frame coding achieved only a reduction in computational complexity (i.e., saved encoding time), and their limitation was an increase in bit rate with a loss of picture quality. In order to avoid the increase in bit rate and the loss of picture quality, a better approach was developed. This paper develops such an approach, i.e., a Gaussian pulse for intra frame coding using the diagonal down-left intra prediction mode, to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before quantization. Multiplying each 4x4 integer-transformed coefficient block by a Gaussian pulse at the macroblock level scales the information of the coefficients in a reversible manner, and the resulting signal becomes abstract. Frequency samples are made abstract in a known and controllable manner without intermixing of coefficients, which prevents the picture from being badly degraded at higher values of the quantization parameter. The proposed work was implemented using MATLAB and the JM 18.6 reference software. It measures the performance parameters PSNR, bit rate and compression of intra frames of YUV video sequences at QCIF resolution under different values of the quantization parameter, with the Gaussian value applied to the diagonal down-left intra prediction mode. The simulation results of the proposed algorithm are tabulated and compared with a previous algorithm, i.e., the method of Tian et al. The proposed algorithm achieves an average bit-rate reduction of 30.98% and maintains consistent picture quality for QCIF sequences compared to the previous algorithm, i.e., the Tian et al method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latifi, Kujtim; Oliver, Jasmine
Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been previously directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with PB and 4 patients planned with the CCC algorithms to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99(GITV) = 7.4 Gy, ΔD99(PTV) = 10.4 Gy, ΔV90(GITV) = 13.7%, ΔV90(PTV) = 37.6%, ΔD95(PTV) = 9.8 Gy, and ΔD(ISO) = 3.4 Gy. GITV = gross internal tumor volume. Conclusions: Local control in patients who were planned to the same nominal dose with the PB and CCC algorithms was statistically significantly different. Possible alternative explanations are described in the report, although they are not thought likely to explain the difference. We conclude that the difference is due to relative dosimetric underdosing of tumors with the PB algorithm.
NASA Technical Reports Server (NTRS)
Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.
1994-01-01
This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart a significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that the RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed to provide for the contingency when both sun data as well as a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates and convergence times as little as 400 sec.
Suppes, T; Swann, A C; Dennehy, E B; Habermacher, E D; Mason, M; Crismon, M L; Toprac, M G; Rush, A J; Shon, S P; Altshuler, K Z
2001-06-01
Use of treatment guidelines for treatment of major psychiatric illnesses has increased in recent years. The Texas Medication Algorithm Project (TMAP) was developed to study the feasibility and process of developing and implementing guidelines for bipolar disorder, major depressive disorder, and schizophrenia in the public mental health system of Texas. This article describes the consensus process used to develop the first set of TMAP algorithms for the Bipolar Disorder Module (Phase 1) and the trial testing the feasibility of their implementation in inpatient and outpatient psychiatric settings across Texas (Phase 2). The feasibility trial answered core questions regarding implementation of treatment guidelines for bipolar disorder. A total of 69 patients were treated with the original algorithms for bipolar disorder developed in Phase 1 of TMAP. Results support that physicians accepted the guidelines, followed recommendations to see patients at certain intervals, and utilized sequenced treatment steps differentially over the course of treatment. While improvements in clinical symptoms (24-item Brief Psychiatric Rating Scale) were observed over the course of enrollment in the trial, these conclusions are limited by the fact that physician volunteers were utilized for both treatment and ratings, and there was no control group. Results from Phases 1 and 2 indicate that it is possible to develop and implement a treatment guideline for patients with a history of mania in public mental health clinics in Texas. TMAP Phase 3, a recently completed larger and controlled trial assessing the clinical and economic impact of treatment guidelines and patient and family education in the public mental health system of Texas, improves upon this methodology.
NASA Astrophysics Data System (ADS)
Bashardanesh, Zahedeh; Lötstedt, Per
2018-03-01
In diffusion controlled reversible bimolecular reactions in three dimensions, a dissociation step is typically followed by multiple, rapid re-association steps slowing down the simulations of such systems. In order to improve the efficiency, we first derive an exact Green's function describing the rate at which an isolated pair of particles undergoing reversible bimolecular reactions and unimolecular decay separates beyond an arbitrarily chosen distance. Then the Green's function is used in an algorithm for particle-based stochastic reaction-diffusion simulations for prediction of the dynamics of biochemical networks. The accuracy and efficiency of the algorithm are evaluated using a reversible reaction and a push-pull chemical network. The computational work is independent of the rates of the re-associations.
A nonlinear H-infinity approach to optimal control of the depth of anaesthesia
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Rigatou, Efthymia; Zervos, Nikolaos
2016-12-01
Controlling the level of anaesthesia is important for improving the success rate of surgeries and for reducing the risks to which operated patients are exposed. This paper proposes a nonlinear H-infinity approach to optimal control of the level of anaesthesia. The dynamic model of the anaesthesia, which describes the concentration of the anaesthetic drug in different parts of the body, is subjected to linearization at local operating points. These are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and of the last control input that was exerted on it. For this linearization Taylor series expansion is performed and the system's Jacobian matrices are computed. For the linearized model an H-infinity controller is designed. The feedback control gains are found by solving at each iteration of the control algorithm an algebraic Riccati equation. The modelling errors due to this approximate linearization are considered as disturbances which are compensated by the robustness of the control loop. The stability of the control loop is confirmed through Lyapunov analysis.
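As a rough illustration of the iterative linearize-and-solve loop the abstract describes, the sketch below recomputes Jacobians at each operating point and derives a state-feedback gain from a continuous algebraic Riccati equation. The two-compartment drug model, rate constants, and weighting matrices are hypothetical placeholders, not the authors' model, and the plain Riccati/LQR gain only mirrors the per-iteration structure of the H-infinity loop rather than reproducing its disturbance-attenuation design.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def jacobian(f, x, u, eps=1e-6):
    """Numerical Jacobians A = df/dx, B = df/du at the operating point (x, u)."""
    n, m = len(x), len(u)
    fx = f(x, u)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x + dx, u) - fx) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x, u + du) - fx) / eps
    return A, B

def riccati_gain(A, B, Q, R):
    """State-feedback gain from the continuous algebraic Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)        # K = R^-1 B^T P

# Hypothetical two-compartment drug model: x' = f(x, u)
def f(x, u):
    k10, k12, k21 = 0.5, 0.3, 0.2             # illustrative rate constants
    return np.array([-(k10 + k12) * x[0] + k21 * x[1] + u[0],
                     k12 * x[0] - k21 * x[1]])

x, u = np.array([1.0, 0.5]), np.array([0.1])
x_ref = np.array([0.8, 0.4])                  # desired depth-of-anaesthesia state
Q, R = np.eye(2), 0.1 * np.eye(1)
for _ in range(50):                           # re-linearize at every control step
    A, B = jacobian(f, x, u)
    K = riccati_gain(A, B, Q, R)
    u = -K @ (x - x_ref)
    x = x + 0.1 * f(x, u)                     # crude Euler step of the plant
```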
Tang, Tao; Tian, Jing; Zhong, Daijun; Fu, Chengyu
2016-06-25
A rate feed-forward control-based sensor fusion is proposed to improve the closed-loop performance of a charge-coupled device (CCD) tracking loop. The target trajectory is recovered by combining line-of-sight (LOS) errors from the CCD and the angular rate from a fiber-optic gyroscope (FOG). A Kalman filter based on the Singer acceleration model utilizes the reconstructed target trajectory to estimate the target velocity. Different from classical feed-forward control, additive feedback loops are inevitably added to the original control loops because some closed-loop information is used. The transfer function of the Kalman filter in the frequency domain is built for analyzing the closed-loop stability. The bandwidth of the Kalman filter is the major factor affecting the control stability and closed-loop performance. Both simulations and experiments are provided to demonstrate the benefits of the proposed algorithm.
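A minimal sketch of the kind of estimator described above, assuming a one-axis Singer (exponentially correlated acceleration) model with state [position, velocity, acceleration], a simplified process-noise approximation, and made-up sample rate and noise values; none of these parameters come from the paper.

```python
import numpy as np

def singer_kf_matrices(dt, tau, sigma_a):
    """Discrete transition and (approximate) process noise for a 1-axis Singer model."""
    alpha = 1.0 / tau
    e = np.exp(-alpha * dt)
    F = np.array([[1, dt, (alpha * dt - 1 + e) / alpha**2],
                  [0, 1, (1 - e) / alpha],
                  [0, 0, e]])
    # Simple (non-exact) process-noise approximation driven by acceleration variance
    Q = sigma_a**2 * np.diag([dt**5 / 20, dt**3 / 3, dt])
    return F, Q

def kf_step(x, P, z, F, Q, H, R):
    """One predict/update cycle; z is the reconstructed LOS position sample."""
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt, tau, sigma_a = 0.005, 0.5, 2.0            # illustrative values
F, Q = singer_kf_matrices(dt, tau, sigma_a)
H = np.array([[1.0, 0.0, 0.0]])               # only the fused LOS position is measured
R = np.array([[1e-4]])
x, P = np.zeros(3), np.eye(3)
for z in np.sin(2 * np.pi * 1.0 * dt * np.arange(2000)):   # synthetic trajectory
    x, P = kf_step(x, P, np.array([z]), F, Q, H, R)
velocity_feedforward = x[1]                   # estimated target rate fed forward
```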
New approach to control the methanogenic reactor of a two-phase anaerobic digestion system.
von Sachs, Jürgen; Meyer, Ulrich; Rys, Paul; Feitkenhauer, Heiko
2003-03-01
A new control strategy for the methanogenic reactor of a two-phase anaerobic digestion system has been developed and successfully tested on the laboratory scale. The control strategy serves to detect inhibitory effects and to achieve good conversion. The concept is based on the idea that volatile fatty acids (VFA) can be measured in the influent of the methanogenic reactor by means of titration. Thus, information on both the output (methane production) and the input of the methanogenic reactor is available, and a (carbon) mass balance can be obtained. The control algorithm comprises a proportional/integral structure with the ratio of (a) the methane production rate measured online and (b) the maximum methane production rate expected (derived from the stoichiometry) as the controlled variable. The manipulated variable is the volumetric feed rate. Results are shown for an experiment with VFA (feed) concentration ramps and for experiments with sodium chloride as an inhibitor.
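A minimal sketch of the ratio-based proportional/integral idea described above; the stoichiometric yield, gains, setpoint, and feed-rate limits are illustrative assumptions, not values from the study.

```python
# Minimal sketch of the ratio-based PI idea, with made-up gains and stoichiometry.
def expected_methane_rate(vfa_conc, feed_rate, yield_ch4=0.35):
    """Maximum CH4 production expected from the incoming VFA load (stoichiometric)."""
    return yield_ch4 * vfa_conc * feed_rate          # e.g. L CH4 per hour

def pi_feed_controller(ch4_measured, vfa_conc, feed_rate, integral,
                       setpoint=0.9, kp=2.0, ki=0.1, dt=0.25,
                       feed_min=0.0, feed_max=10.0):
    """Adjust the volumetric feed rate so the measured/expected CH4 ratio tracks a
    setpoint below 1; a falling ratio signals inhibition and throttles the feed."""
    ratio = ch4_measured / max(expected_methane_rate(vfa_conc, feed_rate), 1e-9)
    error = ratio - setpoint
    integral += error * dt
    new_feed = feed_rate + kp * error + ki * integral
    return min(max(new_feed, feed_min), feed_max), integral
```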
NASA Astrophysics Data System (ADS)
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the lowdelayP condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BDPSNR is observed for 18 sequences when the target complexity is around 40%.
System design of the annular suspension and pointing system /ASPS/
NASA Technical Reports Server (NTRS)
Cunningham, D. C.; Gismondi, T. P.; Wilson, G. W.
1978-01-01
This paper presents the control system design for the Annular Suspension and Pointing System. Actuator sizing and configuration of the system are explained, and the control laws developed for linearizing and compensating the magnetic bearings, roll induction motor and gimbal torquers are given. Decoupling, feedforward and error compensation for the vernier and gimbal controllers is developed. The algorithm for computing the strapdown attitude reference is derived, and the allowable sampling rates, time delays and quantization of control signals are specified.
Resource Allocation and Cross Layer Control in Wireless Networks
2006-08-25
Only fragments of this report are recoverable: it concerns controlling the system to maximize its stability region when the arrival rates lie within the capacity region of the network, and an optimization problem that must be solved at the beginning of each time slot under the heading "Dynamic Control for Network Stability". [The excerpted equations comparing the algorithm's decision variables with those of any other feasible algorithm are garbled beyond recovery in this extraction.]
NASA Technical Reports Server (NTRS)
Hanson, Curt; Miller, Chris; Wall, John H.; VanZwieten, Tannen S.; Gilligan, Eric T.; Orr, Jeb S.
2015-01-01
An Adaptive Augmenting Control (AAC) algorithm for the Space Launch System (SLS) has been developed at the Marshall Space Flight Center (MSFC) as part of the launch vehicle's baseline flight control system. A prototype version of the SLS flight control software was hosted on a piloted aircraft at the Armstrong Flight Research Center to demonstrate the adaptive controller on a full-scale realistic application in a relevant flight environment. Concerns regarding adverse interactions between the adaptive controller and a potential manual steering mode were also investigated by giving the pilot trajectory deviation cues and pitch rate command authority, which is the subject of this paper. Two NASA research pilots flew a total of 25 constant pitch rate trajectories using a prototype manual steering mode with and without adaptive control, evaluating six different nominal and off-nominal test case scenarios. Pilot comments and PIO ratings were given following each trajectory and correlated with aircraft state data and internal controller signals post-flight.
Zonnevijlle, Erik D H; Perez-Abadia, Gustavo; Stremel, Richard W; Maldonado, Claudio J; Kon, Moshe; Barker, John H
2003-11-01
Muscle tissue transplantation applied to regain or dynamically assist contractile functions is known as 'dynamic myoplasty'. Success rates of clinical applications are unpredictable, because of lack of endurance, ischemic lesions, abundant scar formation and inadequate performance of tasks due to lack of refined control. Electrical stimulation is used to control dynamic myoplasties and should be improved to reduce some of these drawbacks. Sequential segmental neuromuscular stimulation improves the endurance and closed-loop control offers refinement in rate of contraction of the muscle, while function-controlling stimulator algorithms present the possibility of performing more complex tasks. An acute feasibility study was performed in anaesthetised dogs combining these techniques. Electrically stimulated gracilis-based neo-sphincters were compared to native sphincters with regard to their ability to maintain continence. Measurements were made during fast bladder pressure changes, static high bladder pressure and slow filling of the bladder, mimicking among others posture changes, lifting heavy objects and diuresis. In general, neo-sphincter and native sphincter performance showed no significant difference during these measurements. However, during high bladder pressures reaching 40 cm H(2)O the neo-sphincters maintained positive pressure gradients, whereas most native sphincters relaxed. During slow filling of the bladder the neo-sphincters maintained a controlled positive pressure gradient for a prolonged time without any form of training. Furthermore, the accuracy of these maintained pressure gradients proved to be within the limits set up by the native sphincters. Refinements using more complicated self-learning function-controlling algorithms proved to be effective also and are briefly discussed. In conclusion, a combination of sequential stimulation, closed-loop control and function-controlling algorithms proved feasible in this dynamic graciloplasty-model. Neo-sphincters were created, which would probably provide an acceptable performance, when the stimulation system could be implanted and further tested. Sizing this technique down to implantable proportions seems to be justified and will enable exploration of the possible benefits.
A Simple Two Aircraft Conflict Resolution Algorithm
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
1999-01-01
Conflict detection and resolution methods are crucial for distributed air-ground traffic management, in which the crew in the cockpit, dispatchers in operation control centers, and air traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and resolution method.
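The constant-velocity prediction of the minimum separation and time-to-go to the closest point of approach can be written compactly; the sketch below is a generic two-aircraft version with assumed separation and look-ahead thresholds, not the paper's exact criteria.

```python
import numpy as np

def conflict_metrics(p1, v1, p2, v2):
    """Time to closest point of approach and minimum separation, assuming both
    aircraft hold current speed and heading (constant-velocity prediction)."""
    dp = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)   # relative position
    dv = np.asarray(v2, dtype=float) - np.asarray(v1, dtype=float)   # relative velocity
    if np.dot(dv, dv) < 1e-12:
        return np.inf, np.linalg.norm(dp)      # no relative motion
    t_cpa = max(-np.dot(dp, dv) / np.dot(dv, dv), 0.0)
    min_sep = np.linalg.norm(dp + dv * t_cpa)
    return t_cpa, min_sep

# A conflict is declared when the predicted miss distance violates an assumed
# separation standard within an assumed look-ahead horizon (units: nm, knots, hours).
t_cpa, min_sep = conflict_metrics((0, 0), (450, 0), (40, 3), (-400, 0))
conflict = (min_sep < 5.0) and (t_cpa < 20.0 / 60.0)
```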
Biomarker-guided antibiotic therapy—strengths and limitations
Salluh, Jorge; Martin-Loeches, Ignacio; Póvoa, Pedro
2017-01-01
Biomarkers such as C-reactive protein (CRP) and procalcitonin (PCT) emerged as tools to help clinicians diagnose infection and properly initiate and define the duration of antibiotic therapy. Several randomized controlled trials, including adult critically ill patients, showed that PCT-guided antibiotic stewardship was repeatedly associated with a decrease in the duration of antibiotic therapy with no apparent harm. There are, however, some relevant limitations in these trials, namely the low rate of compliance with PCT-guided algorithms, the high rate of exclusion (without including common clinical situations and pathogens), and the long duration of antibiotic therapy in control groups. Such limitations weakened the real impact of such algorithms on the clinical decision-making process and strengthened the concept that the initiation and the duration of antibiotic therapy cannot depend solely on a biomarker. Future efforts should address these limitations in order to better clarify the role of biomarkers in the complex and multifactorial issue of antibiotic management and to understand in depth their potential effect on mortality. PMID:28603723
Installation of automatic control at experimental breeder reactor II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, H.A.; Booty, W.F.; Chick, D.R.
1985-08-01
The Experimental Breeder Reactor II (EBR-II) has been modified to permit automatic control capability. Necessary mechanical and electrical changes were made on a regular control rod position; the motor, gears, and controller were replaced. A digital computer system was installed that has the programming capability for varied power profiles. The modifications permit transient testing at EBR-II. Experiments were run that increased power linearly at rates of up to 4 MW/s (16% of the initial power of 25 MW(thermal) per second), held power constant, and decreased power at a rate no slower than the increase rate. Thus the performance of the automatic control algorithm, the mechanical and electrical control equipment, and the qualifications of the driver fuel for future power change experiments were all demonstrated.
Application of dynamic recurrent neural networks in nonlinear system identification
NASA Astrophysics Data System (ADS)
Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang
2006-11-01
An adaptive identification method using a simple dynamic recurrent neural network (SRNN) for nonlinear dynamic systems is presented in this paper. The method is based on the idea that using the inner-state feedback of a dynamic network to describe the nonlinear kinetic characteristics of a system reflects its dynamic behavior more directly. The recursive prediction error (RPE) learning algorithm for the SRNN is derived, and the algorithm is improved by studying a topological structure for the recursion layer without weight values. Simulation results indicate that this kind of neural network is suitable for real-time control owing to its fewer weights, simpler learning algorithm, higher identification speed, and higher model precision. It addresses the problems of an intricate training algorithm and slow convergence caused by the complicated topological structure of the usual dynamic recurrent neural network.
Vision-based posture recognition using an ensemble classifier and a vote filter
NASA Astrophysics Data System (ADS)
Ji, Peng; Wu, Changcheng; Xu, Xiaonong; Song, Aiguo; Li, Huijun
2016-10-01
Posture recognition is a very important mode of Human-Robot Interaction (HRI). To segment effective postures from an image, we propose an improved region-grow algorithm combined with the Single Gauss Color Model. Experiments show that the improved region-grow algorithm extracts a more complete and accurate posture than the traditional Single Gauss Model and region-grow algorithm, while eliminating similar regions from the background. For the recognition step, a CNN ensemble classifier is proposed to improve the recognition rate, and a vote filter is applied to the sequence of recognition results to reduce misjudgments during continuous gesture control. Compared with a single CNN classifier, the proposed CNN ensemble classifier yields a 96.27% recognition rate, and the proposed vote filter further improves the recognition results and reduces misjudgments during consecutive gesture switches.
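The vote filter can be read as a sliding-window majority vote over classifier outputs; the sketch below is one plausible interpretation with an assumed window length and hypothetical gesture labels (the paper's window size is not stated here).

```python
from collections import Counter, deque

class VoteFilter:
    """Majority-vote smoother over the last `window` classifier outputs, used to
    suppress isolated misjudgements during a continuous gesture sequence."""
    def __init__(self, window=5):
        self.buffer = deque(maxlen=window)

    def update(self, label):
        self.buffer.append(label)
        return Counter(self.buffer).most_common(1)[0][0]

# Example: a single spurious 'stop' inside a run of 'left' is filtered out.
vf = VoteFilter(window=5)
stream = ['left', 'left', 'stop', 'left', 'left', 'left']
smoothed = [vf.update(s) for s in stream]
```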
Comparison of algorithms of testing for use in automated evaluation of sensation.
Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M
1990-10-01
Estimates of vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic and controlled clinical trials. We studied which algorithm of testing and finding threshold should be used in automatic systems by comparing among algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained by fast linear rates (4.15 or 16.6 microns/sec) overestimated threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus increasing exponentially at rates of 0.5 or 1.0 just noticeable difference (JND) units per second, and interspersion of null stimuli, Békésy with null stimuli, provided accurate, repeatable, and fast estimates of threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluation of the sensation of patients with neuropathy.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High-rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high-rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth-efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
Yovich, John L; Alsbjerg, Birgit; Conceicao, Jason L; Hinchliffe, Peter M; Keane, Kevin N
2016-01-01
The first PIVET algorithm for individualized recombinant follicle stimulating hormone (rFSH) dosing in in vitro fertilization, reported in 2012, was based on age and antral follicle count grading with adjustments for anti-Müllerian hormone level, body mass index, day-2 FSH, and smoking history. In 2007, it was enabled by the introduction of a metered rFSH pen allowing small dosage increments of ~8.3 IU per click. In 2011, a second rFSH pen was introduced allowing more precise dosages of 12.5 IU per click, and both pens with their individual algorithms have been applied continuously at our clinic. The objective of this observational study was to validate the PIVET algorithms pertaining to the two rFSH pens with the aim of collecting ≤15 oocytes and minimizing the risk of ovarian hyperstimulation syndrome. The data set included 2,822 in vitro fertilization stimulations over a 6-year period until April 2014 applying either of the two individualized dosing algorithms and corresponding pens. The main outcome measures were mean oocytes retrieved and resultant embryos designated for transfer or cryopreservation permitted calculation of oocyte and embryo utilization rates. Ensuing pregnancies were tracked until live births, and live birth productivity rates embracing fresh and frozen transfers were calculated. Overall, the results showed that mean oocyte numbers were 10.0 for all women <40 years with 24% requiring rFSH dosages <150 IU. Applying both specific algorithms in our clinic meant that the starting dose was not altered for 79.1% of patients and for 30.1% of those receiving the very lowest rFSH dosages (≤75 IU). Only 0.3% patients were diagnosed with severe ovarian hyperstimulation syndrome, all deemed avoidable due to definable breaches from the protocols. The live birth productivity rates exceeded 50% for women <35 years and was 33.2% for the group aged 35-39 years. Routine use of both algorithms led to only 11.6% of women generating >15 oocytes, significantly lower than recently published data applying conventional dosages (38.2%; P<0.0001). When comparing both specific algorithms to each other, the outcomes were mainly comparable for pregnancy, live birth, and miscarriage rate. However, there were significant differences in relation to number of oocytes retrieved, but the mean for both the algorithms remained well below 15 oocytes. Consequently, application of both these algorithms in our in vitro fertilization clinic allows the use of both the rFSH products, with very similar results, and they can be considered validated on the basis of effectiveness and safety, clearly avoiding ovarian hyperstimulation syndrome.
Sleep Quality Estimation based on Chaos Analysis for Heart Rate Variability
NASA Astrophysics Data System (ADS)
Fukuda, Toshio; Wakuda, Yuki; Hasegawa, Yasuhisa; Arai, Fumihito; Kawaguchi, Mitsuo; Noda, Akiko
In this paper, we propose an algorithm to estimate sleep quality from heart rate variability using chaos analysis. Polysomnography (PSG) is a conventional and reliable system for diagnosing sleep disorders and evaluating their severity and therapeutic effect by estimating sleep quality from multiple channels. However, the recording process requires a lot of time and a controlled measurement environment, and analyzing PSG data is laborious because the huge amount of sensed data must be evaluated manually. At the same time, attention has been drawn to people who make mistakes or cause accidents because of the loss of regular sleep and of homeostasis. A simple home system for checking one's own sleep is therefore required, and an estimation algorithm for such a system should be developed. We therefore propose an algorithm to estimate sleep quality based only on heart rate variability, which can be measured in an uncontrolled environment by a simple sensor such as a pressure sensor or an infrared sensor, by experimentally finding the relationship between chaos indices and sleep quality. A system including the estimation algorithm can inform the user of the patterns and quality of his or her daily sleep, so that the user can arrange his or her schedule in advance, pay more attention to the sleep results, and consult a doctor.
Identification and Reconfigurable Control of Impaired Multi-Rotor Drones
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Bencomo, Alfredo
2016-01-01
The paper presents an algorithm for control and safe landing of impaired multi-rotor drones when one or more motors fail simultaneously or in any sequence. It includes three main components: an identification block, a reconfigurable control block, and a decision-making block. The identification block monitors each motor's load characteristics and the current drawn, based on which failures are detected. The control block generates the required total thrust and three-axis torques for the altitude, horizontal position, and/or orientation control of the drone based on time-scale separation and nonlinear dynamic inversion. The horizontal displacement is controlled by modulating the roll and pitch angles. The decision-making algorithm maps the total thrust and three torques into the individual motor thrusts based on the information provided by the identification block. The drone continues mission execution as long as the functioning motors provide controllability. Otherwise, the controller is switched to a safe mode, which gives up yaw control and commands a safe landing spot and descent rate while maintaining a horizontal attitude.
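One common way to realize the mapping performed by the decision-making block is pseudo-inverse control allocation over the healthy motors; the sketch below assumes a hypothetical quad-X mixing matrix, arm length, yaw coefficient, and thrust limits, which are illustrative rather than taken from the paper.

```python
import numpy as np

def mixer(arm=0.25, k_yaw=0.02):
    """Thrust/torque mixing matrix for a hypothetical quad-X layout:
    rows = [total thrust, roll torque, pitch torque, yaw torque], columns = motors."""
    return np.array([[ 1,     1,      1,      1    ],
                     [-arm,   arm,    arm,   -arm  ],
                     [ arm,  -arm,    arm,   -arm  ],
                     [ k_yaw, k_yaw, -k_yaw, -k_yaw]])

def allocate(wrench_cmd, healthy, t_min=0.0, t_max=6.0):
    """Map commanded [T, tau_x, tau_y, tau_z] to individual motor thrusts,
    dropping columns of motors reported failed by the identification block."""
    M = mixer()
    idx = [i for i, ok in enumerate(healthy) if ok]
    thrust_healthy, *_ = np.linalg.lstsq(M[:, idx], wrench_cmd, rcond=None)
    thrusts = np.zeros(M.shape[1])
    thrusts[idx] = np.clip(thrust_healthy, t_min, t_max)
    return thrusts

# Motor 2 has failed: the least-squares fit lets the remaining motors reproduce the
# thrust and attitude torques as well as they can (yaw tracking is sacrificed first).
cmd = np.array([9.81 * 1.2, 0.0, 0.0, 0.0])    # hover thrust for a 1.2 kg drone
print(allocate(cmd, healthy=[True, True, False, True]))
```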
Fuzzy support vector machines for adaptive Morse code recognition.
Yang, Cheng-Hong; Jin, Li-Cheng; Chuang, Li-Yeh
2006-11-01
Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, facilitating mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as a communication adaptive device for persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. Therefore, an adaptive automatic recognition method with a high recognition rate is needed. The proposed system uses both fuzzy support vector machines and the variable-degree variable-step-size least-mean-square algorithm to achieve these objectives. We apply fuzzy memberships to each point, and provide different contributions to the decision learning function for support vector machines. Statistical analyses demonstrated that the proposed method elicited a higher recognition rate than other algorithms in the literature.
Lopes Antunes, Ana Carolina; Dórea, Fernanda; Halasa, Tariq; Toft, Nils
2016-05-01
Surveillance systems are critical for accurate, timely monitoring and effective disease control. In this study, we investigated the performance of univariate process monitoring control algorithms in detecting changes in seroprevalence for endemic diseases. We also assessed the effect of sample size (number of sentinel herds tested in the surveillance system) on the performance of the algorithms. Three univariate process monitoring control algorithms were compared: Shewart p Chart(1) (PSHEW), Cumulative Sum(2) (CUSUM) and Exponentially Weighted Moving Average(3) (EWMA). Increases in seroprevalence were simulated from 0.10 to 0.15 and 0.20 over 4, 8, 24, 52 and 104 weeks. Each epidemic scenario was run with 2000 iterations. The cumulative sensitivity(4) (CumSe) and timeliness were used to evaluate the algorithms' performance with a 1% false alarm rate. Using these performance evaluation criteria, it was possible to assess the accuracy and timeliness of the surveillance system working in real-time. The results showed that EWMA and PSHEW had higher CumSe (when compared with the CUSUM) from week 1 until the end of the period for all simulated scenarios. Changes in seroprevalence from 0.10 to 0.20 were more easily detected (higher CumSe) than changes from 0.10 to 0.15 for all three algorithms. Similar results were found with EWMA and PSHEW, based on the median time to detection. Changes in the seroprevalence were detected later with CUSUM, compared to EWMA and PSHEW for the different scenarios. Increasing the sample size 10 fold halved the time to detection (CumSe=1), whereas increasing the sample size 100 fold reduced the time to detection by a factor of 6. This study investigated the performance of three univariate process monitoring control algorithms in monitoring endemic diseases. It was shown that automated systems based on these detection methods identified changes in seroprevalence at different times. Increasing the number of tested herds would lead to faster detection. However, the practical implications of increasing the sample size (such as the costs associated with the disease) should also be taken into account. Copyright © 2016 Elsevier B.V. All rights reserved.
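As an illustration of one of the compared detectors, the sketch below implements a basic EWMA chart on weekly seroprevalence estimates from a fixed number of sentinel herds; the smoothing constant, control-limit multiplier, and binomial sampling model are generic assumptions, not the study's exact parameterization.

```python
import numpy as np

def ewma_chart(prevalence_series, baseline=0.10, lam=0.2, L=2.7, n_herds=100):
    """Flag weeks where the EWMA of observed seroprevalence exceeds the upper
    control limit derived from binomial sampling of `n_herds` sentinel herds."""
    sigma0 = np.sqrt(baseline * (1 - baseline) / n_herds)
    z, alarms = baseline, []
    for t, p_obs in enumerate(prevalence_series):
        z = lam * p_obs + (1 - lam) * z
        # Time-varying EWMA standard deviation
        sigma_z = sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        alarms.append(z > baseline + L * sigma_z)
    return alarms

# Simulated slow rise in seroprevalence from 0.10 to 0.15 over 24 weeks
rng = np.random.default_rng(1)
true_p = np.concatenate([np.full(20, 0.10), np.linspace(0.10, 0.15, 24)])
observed = rng.binomial(100, true_p) / 100
print(ewma_chart(observed))
```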
Nutrient Stress Detection in Corn Using Neural Networks and AVIRIS Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Estep, Lee
2001-01-01
AVIRIS image cube data has been processed for the detection of nutrient stress in corn by both known ratio-type algorithms and trained neural networks. The USDA Shelton, NE, ARS Variable Rate Nitrogen Application (VRAT) experimental farm was the site used in the study. Upon application of ANOVA and Dunnett multiple comparison tests to the outcomes of both the neural network processing and the ratio-type algorithms, it was found that the neural network methodology provides a better overall capability to separate nutrient-stressed crops from in-field controls.
GPU Acceleration of DSP for Communication Receivers.
Gunther, Jake; Gunther, Hyrum; Moon, Todd
2017-09-01
Graphics processing unit (GPU) implementations of signal processing algorithms can outperform CPU-based implementations. This paper describes the GPU implementation of several algorithms encountered in a wide range of high-data rate communication receivers including filters, multirate filters, numerically controlled oscillators, and multi-stage digital down converters. These structures are tested by processing the 20 MHz wide FM radio band (88-108 MHz). Two receiver structures are explored: a single channel receiver and a filter bank channelizer. Both run in real time on NVIDIA GeForce GTX 1080 graphics card.
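A numerically controlled oscillator of the kind listed above is typically a phase accumulator driving a sinusoid lookup; the sketch below is a generic software version with illustrative frequencies and sample rate (the paper's GPU kernels and exact front-end parameters are not reproduced).

```python
import numpy as np

def nco(freq_hz, fs_hz, n_samples, phase_bits=32):
    """Phase-accumulator NCO: a fixed-point tuning word advances the phase each
    sample, and the phase drives a complex exponential (computed directly here)."""
    acc_max = 1 << phase_bits
    tuning_word = int(round(freq_hz / fs_hz * acc_max)) % acc_max
    phase_acc = (np.arange(n_samples) * tuning_word) % acc_max
    phase = 2 * np.pi * phase_acc / acc_max
    return np.exp(1j * phase)                  # local oscillator samples

# Mix a station 100 kHz above the capture center down to baseband
# (illustrative numbers only; a real FM front end would also band-pass filter).
fs = 25e6
lo = nco(freq_hz=-0.1e6, fs_hz=fs, n_samples=4096)   # -100 kHz shift
# baseband = rf_samples * lo    # followed by low-pass filtering and decimation
```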
Hammerstrom, Donald J.
2013-10-15
A method for managing the charging and discharging of batteries wherein at least one battery is connected to a battery charger and the battery charger is connected to a power supply. A plurality of controllers in communication with one another are provided, each of the controllers monitoring a subset of input variables. A set of charging constraints may then be generated for each controller as a function of the subset of input variables. A set of objectives for each controller may also be generated. A preferred charge rate for each controller is generated as a function of either the set of objectives, the charging constraints, or both, using an algorithm that accounts for each of the preferred charge rates for each of the controllers and/or that does not violate any of the charging constraints. A current flow between the battery and the battery charger is then provided at the actual charge rate.
1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.
Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi
2015-04-01
Optical flow sensors have been a long running theme in neuromorphic vision sensors which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed towards miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels that have local gain control and adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
Performance and policy dimensions in internet routing
NASA Technical Reports Server (NTRS)
Mills, David L.; Boncelet, Charles G.; Elias, John G.; Schragger, Paul A.; Jackson, Alden W.; Thyagarajan, Ajit
1995-01-01
The Internet Routing Project, referred to in this report as the 'Highball Project', has been investigating architectures suitable for networks spanning large geographic areas and capable of very high data rates. The Highball network architecture is based on a high speed crossbar switch and an adaptive, distributed, TDMA scheduling algorithm. The scheduling algorithm controls the instantaneous configuration and dwell time of the switch, one of which is attached to each node. In order to send a single burst or a multi-burst packet, a reservation request is sent to all nodes. The scheduling algorithm then configures the switches immediately prior to the arrival of each burst, so it can be relayed immediately without requiring local storage. Reservations and housekeeping information are sent using a special broadcast-spanning-tree schedule. Progress to date in the Highball Project includes the design and testing of a suite of scheduling algorithms, construction of software reservation/scheduling simulators, and construction of a strawman hardware and software implementation. A prototype switch controller and timestamp generator have been completed and are in test. Detailed documentation on the algorithms, protocols and experiments conducted are given in various reports and papers published. Abstracts of this literature are included in the bibliography at the end of this report, which serves as an extended executive summary.
Volitional and Real-Time Control Cursor Based on Eye Movement Decoding Using a Linear Decoding Model
Zhang, Cheng
2016-01-01
The aim of this study is to build a linear decoding model that reveals the relationship between the movement information and the EOG (electrooculogram) data in order to control a cursor online and continuously with blinks and eye pursuit movements. First, a blink detection method is proposed to reject voluntary single-blink or double-blink information from the EOG. Then, a linear decoding model of the time series is developed to predict the position of gaze, and the model parameters are calibrated by the RLS (Recursive Least Squares) algorithm; decoding accuracy is assessed through a cross-validation procedure. Additionally, subsection processing, increment control, and online calibration are presented to realize the online control. Finally, the technology is applied to the volitional and online control of a cursor to hit multiple predefined targets. Experimental results show that the blink detection algorithm performs well, with a voluntary blink detection rate over 95%. By combining the merits of blinks and smooth pursuit movements, the movement information of the eyes can be decoded in good conformity, with an average Pearson correlation coefficient of up to 0.9592, and all signal-to-noise ratios are greater than 0. The novel system allows people to successfully and economically control a cursor online with a hit rate of 98%. PMID:28058044
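The RLS calibration of a linear decoding model can be sketched as follows; the feature dimension, forgetting factor, and initialization are assumptions for illustration, not the paper's settings, and one decoder per screen axis is only one plausible arrangement.

```python
import numpy as np

class RLSDecoder:
    """Recursive least squares fit of a linear map from an EOG feature vector to a
    gaze/cursor coordinate, updated online during calibration."""
    def __init__(self, n_features, forgetting=0.99, delta=100.0):
        self.w = np.zeros(n_features)           # decoding weights
        self.P = delta * np.eye(n_features)     # inverse correlation matrix
        self.lam = forgetting

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)             # gain vector
        err = y - self.w @ x
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.w @ x                        # prediction after the update

# Calibration loop (one decoder per screen axis); x could hold recent EOG samples
# plus a bias term, y the known on-screen target coordinate.
dec_x = RLSDecoder(n_features=5)
# for x_feat, target_x in calibration_stream:
#     dec_x.update(x_feat, target_x)
```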
Real-Time Variable Rate Spraying in Orchards and Vineyards: A Review
NASA Astrophysics Data System (ADS)
Wandkar, Sachin Vilas; Bhatt, Yogesh Chandra; Jain, H. K.; Nalawade, Sachin M.; Pawar, Shashikant G.
2018-06-01
Effective and efficient use of pesticides in orchards has been a concern for many years. With conventional constant-rate sprayers, an equal dose of pesticide is applied to each tree. Since there is great variation in the size and shape of each tree in the orchard, trees get either oversprayed or undersprayed. Real-time variable rate spraying technology offers pesticide application in accordance with tree size. With the help of suitable sensors, tree characteristics such as canopy volume and foliage density can be acquired, and with a micro-processing unit coupled with a proper algorithm, the flow of electronic proportional valves can be controlled, thereby controlling the flow rate of the nozzles according to tree characteristics. Sensors can also detect the spaces between trees, which allows the spray to be controlled in those spaces. Variable rate spraying helps in achieving precision in the spraying operation, especially inside orchards. This paper reviews real-time variable rate spraying technology and the efforts made by various researchers toward real-time variable application in orchards and vineyards.
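The core flow-rate decision described above (spray volume tracking the sensed canopy volume, shut off in the gaps between trees) can be sketched as a small function; the dose rate, travel speed, section length, and flow limits are placeholder values, not figures from the review.

```python
def nozzle_flow_rate(canopy_volume_m3, dose_l_per_m3=0.05,
                     forward_speed_m_s=1.0, section_length_m=0.5,
                     q_min=0.0, q_max=2.0):
    """Flow rate (L/min) for one nozzle section so that the applied volume tracks
    the canopy volume measured by the sensor; zero canopy -> spray shut off."""
    if canopy_volume_m3 <= 0.0:                  # gap between trees
        return 0.0
    dwell_time_min = section_length_m / forward_speed_m_s / 60.0
    q = dose_l_per_m3 * canopy_volume_m3 / dwell_time_min
    return min(max(q, q_min), q_max)

# The proportional valve command would then be derived from q, e.g. via the
# valve's (assumed) flow-versus-duty calibration curve.
```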
Tang, Tao; Chen, Sisi; Huang, Xuanlin; Yang, Tao; Qi, Bo
2018-01-01
High-performance position control can be improved by the compensation of disturbances in a gear-driven control system. This paper presents a model-free disturbance observer (DOB) based on sensor fusion to reduce some error-related disturbances in a gear-driven gimbal. This DOB uses the rate deviation to detect disturbances for the implementation of a high-gain compensator. In comparison with the angular position signal, the rate deviation between the load and the motor can quickly exhibit the disturbances existing in the gear-driven gimbal. Due to the high bandwidth of the motor rate closed loop, an inverse model of the plant is not necessary to implement the DOB. Moreover, this DOB requires neither complex modeling of the plant nor the use of additional sensors. Without rate sensors providing the angular rate, the rate deviation is easily detected by encoders mounted on the motor side and the load side, respectively. Extensive experiments are provided to demonstrate the benefits of the proposed algorithm. PMID:29498643
Optimal Control Techniques for Resistive Wall Modes in Tokamaks
NASA Astrophysics Data System (ADS)
Clement, Mitchell Dobbs Pearson
Tokamaks can excite kink modes that can lock or nearly lock to the vacuum vessel wall, and whose rotation frequencies and growth rates vary in time but are generally inversely proportional to the magnetic flux diffusion time of the vacuum vessel wall. This magnetohydrodynamic (MHD) instability is pressure limiting in tokamaks and is called the Resistive Wall Mode (RWM). Future tokamaks that are expected to operate as fusion reactors will be required to maximize plasma pressure in order to maximize fusion performance. The DIII-D tokamak is equipped with electromagnetic control coils, both inside and outside of its vacuum vessel, which create magnetic fields that are small by comparison to the machine's equilibrium field but are able to dynamically counteract the RWM. Presently for RWM feedback, DIII-D uses its interior control coils using a classical proportional gain only controller to achieve high plasma pressure. Future advanced tokamak designs will not likely have the luxury of interior control coils and a proportional gain algorithm is not expected to be effective with external control coils. The computer code VALEN was designed to calculate the performance of an MHD feedback control system in an arbitrary geometry. VALEN models the perturbed magnetic field from a single MHD instability and its interaction with surrounding conducting structures using a finite element approach. A linear quadratic gaussian (LQG) control, or H 2 optimal control, algorithm based on the VALEN model for RWM feedback was developed for use with DIII-D's external control coil set. The algorithm is implemented on a platform that combines a graphics processing unit (GPU) for real-time control computation with low latency digital input/output control hardware and operates in parallel with the DIII-D Plasma Control System (PCS). Simulations and experiments showed that modern control techniques performed better, using 77% less current, than classical techniques when using coils external to the vacuum vessel for RWM feedback. RWM feedback based on VALEN outperformed a classical control algorithm using external coils to suppress the normalized plasma response to a rotating n=1 perturbation applied by internal coils over a range of frequencies. This study describes the design, development and testing of the GPU based control hardware and algorithm along with its performance during experiment and simulation.
Liu, Peiying; Lu, Hanzhang; Filbey, Francesca M.; Pinkham, Amy E.; McAdams, Carrie J.; Adinoff, Bryon; Daliparthi, Vamsi; Cao, Yan
2014-01-01
Phase-Contrast MRI (PC-MRI) is a noninvasive technique to measure blood flow. In particular, global but highly quantitative cerebral blood flow (CBF) measurement using PC-MRI complements several other CBF mapping methods such as arterial spin labeling and dynamic susceptibility contrast MRI by providing a calibration factor. The ability to estimate blood supply in physiological units also lays a foundation for assessment of brain metabolic rate. However, a major obstacle before wider applications of this method is that the slice positioning of the scan, ideally placed perpendicular to the feeding arteries, requires considerable expertise and can present a burden to the operator. In the present work, we proposed that the majority of PC-MRI scans can be positioned using an automatic algorithm, leaving only a small fraction of arteries requiring manual positioning. We implemented and evaluated an algorithm for this purpose based on feature extraction of a survey angiogram, which is of minimal operator dependence. In a comparative test-retest study with 7 subjects, the blood flow measurement using this algorithm showed an inter-session coefficient of variation (CoV) of . The Bland-Altman method showed that the automatic method differs from the manual method by between and , for of the CBF measurements. This is comparable to the variance in CBF measurement using manually-positioned PC MRI alone. In a further application of this algorithm to 157 consecutive subjects from typical clinical cohorts, the algorithm provided successful positioning in 89.7% of the arteries. In 79.6% of the subjects, all four arteries could be planned using the algorithm. Chi-square tests of independence showed that the success rate was not dependent on the age or gender, but the patients showed a trend of lower success rate (p = 0.14) compared to healthy controls. In conclusion, this automatic positioning algorithm could improve the application of PC-MRI in CBF quantification. PMID:24787742
Chen, Bin; Peng, Xiuming; Xie, Tiansheng; Jin, Changzhong; Liu, Fumin; Wu, Nanping
2017-07-01
Currently, there are three algorithms for screening of syphilis: traditional algorithm, reverse algorithm and European Centre for Disease Prevention and Control (ECDC) algorithm. To date, there is not a generally recognized diagnostic algorithm. When syphilis meets HIV, the situation is even more complex. To evaluate their screening performance and impact on the seroprevalence of syphilis in HIV-infected individuals, we conducted a cross-sectional study included 865 serum samples from HIV-infected patients in a tertiary hospital. Every sample (one per patient) was tested with toluidine red unheated serum test (TRUST), T. pallidum particle agglutination assay (TPPA), and Treponema pallidum enzyme immunoassay (TP-EIA) according to the manufacturer's instructions. The results of syphilis serological testing were interpreted following different algorithms respectively. We directly compared the traditional syphilis screening algorithm with the reverse syphilis screening algorithm in this unique population. The reverse algorithm achieved remarkable higher seroprevalence of syphilis than the traditional algorithm (24.9% vs. 14.2%, p < 0.0001). Compared to the reverse algorithm, the traditional algorithm also had a missed serodiagnosis rate of 42.8%. The total percentages of agreement and corresponding kappa values of tradition and ECDC algorithm compared with those of reverse algorithm were as follows: 89.4%,0.668; 99.8%, 0.994. There was a very good strength of agreement between the reverse and the ECDC algorithm. Our results supported the reverse (or ECDC) algorithm in screening of syphilis in HIV-infected populations. In addition, our study demonstrated that screening of HIV-populations using different algorithms may result in a statistically different seroprevalence of syphilis.
Study of efficient video compression algorithms for space shuttle applications
NASA Technical Reports Server (NTRS)
Poo, Z.
1975-01-01
Results are presented of a study on video data compression techniques applicable to space flight communication. The study is directed towards monochrome (black and white) picture communication, with special emphasis on the feasibility of hardware implementation. The primary factors for such a communication system in space flight applications are picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirements are summarized. Simulations for the study were performed on the improved LIM video controller, which is computer-controlled by the META-4 CPU.
Distributed synchronization control of complex networks with communication constraints.
Xu, Zhenhua; Zhang, Dan; Song, Hongbo
2016-11-01
This paper is concerned with the distributed synchronization control of complex networks with communication constraints. In this work, the controllers communicate with each other through the wireless network, acting as a controller network. Due to the constrained transmission power, techniques such as the packet size reduction and transmission rate reduction schemes are proposed which could help reduce communication load of the controller network. The packet dropout problem is also considered in the controller design since it is often encountered in networked control systems. We show that the closed-loop system can be modeled as a switched system with uncertainties and random variables. By resorting to the switched system approach and some stochastic system analysis method, a new sufficient condition is firstly proposed such that the exponential synchronization is guaranteed in the mean-square sense. The controller gains are determined by using the well-known cone complementarity linearization (CCL) algorithm. Finally, a simulation study is performed, which demonstrates the effectiveness of the proposed design algorithm. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Overview of the Miniature Sensor Technology Integration (MSTI) spacecraft attitude control system
NASA Technical Reports Server (NTRS)
Mcewen, Rob
1994-01-01
Msti2 is a small, 164 kg (362 lb), 3-axis stabilized, low-Earth-orbiting satellite whose mission is missile booster tracking. The spacecraft is actuated by 3 reaction wheels and 12 hot gas thrusters. It carries enough fuel for a projected life of 6 months. The sensor complement consists of a Horizon Sensor, a Sun Sensor, low-rate gyros, and a high rate gyro for despin. The total pointing control error allocation is 6 mRad (.34 Deg), and this is while tracking a target on the Earth's surface. This paper describes the Attitude Control System (ACS) algorithms which include the following: attitude acquisition (despin, Sun and Earth acquisition), attitude determination, attitude control, and linear stability analysis.
TEAM: efficient two-locus epistasis tests in human genome-wide association study.
Zhang, Xiang; Huang, Shunping; Zou, Fei; Wang, Wei
2010-06-15
As a promising tool for identifying genetic markers underlying phenotypic differences, genome-wide association study (GWAS) has been extensively investigated in recent years. In GWAS, detecting epistasis (or gene-gene interaction) is preferable over single locus study since many diseases are known to be complex traits. A brute force search is infeasible for epistasis detection in the genome-wide scale because of the intensive computational burden. Existing epistasis detection algorithms are designed for dataset consisting of homozygous markers and small sample size. In human study, however, the genotype may be heterozygous, and number of individuals can be up to thousands. Thus, existing methods are not readily applicable to human datasets. In this article, we propose an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS. Our algorithm is exhaustive, i.e. it does not ignore any epistatic interaction. Utilizing the minimum spanning tree structure, the algorithm incrementally updates the contingency tables for epistatic tests without scanning all individuals. Our algorithm has broader applicability and is more efficient than existing methods for large sample study. It supports any statistical test that is based on contingency tables, and enables both family-wise error rate and false discovery rate controlling. Extensive experiments show that our algorithm only needs to examine a small portion of the individuals to update the contingency tables, and it achieves at least an order of magnitude speed up over the brute force approach.
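TEAM's contribution is the incremental, tree-based update of the contingency tables; the sketch below shows only the underlying per-pair contingency-table test that such an approach accelerates (a 9 x 2 chi-square on joint genotypes versus a binary phenotype), with a note on Bonferroni family-wise error control. The genotype coding and statistic choice here are generic assumptions, not TEAM's internals.

```python
import numpy as np
from scipy.stats import chi2

def two_locus_test(snp_a, snp_b, phenotype):
    """Chi-square test of association between the joint genotype at two loci
    (each coded 0/1/2) and a binary phenotype, via a 9 x 2 contingency table."""
    table = np.zeros((9, 2))
    for a, b, y in zip(snp_a, snp_b, phenotype):
        table[3 * a + b, int(y)] += 1
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()
    mask = expected > 0                          # skip empty genotype classes
    stat = ((table - expected)[mask] ** 2 / expected[mask]).sum()
    dof = (mask.any(axis=1).sum() - 1) * 1       # (non-empty rows - 1) * (2 - 1)
    return stat, chi2.sf(stat, dof)

# Family-wise error control over all pairs could then use a Bonferroni cut-off,
# alpha / (number of SNP pairs), on the returned p-values.
```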
Chen, Qihong; Long, Rong; Quan, Shuhai
2014-01-01
This paper presents a neural network predictive control strategy to optimize power distribution for the fuel cell/ultracapacitor hybrid power system of a robot. We model the nonlinear power system by employing a time-variant auto-regressive moving average model with exogenous input (ARMAX) and using a recurrent neural network to represent the complicated coefficients of the ARMAX model. Because the dynamics of the system are viewed in this framework as operating-state-dependent, time-varying local linear behavior, a linear constrained model predictive control algorithm is developed to optimize the power splitting between the fuel cell and the ultracapacitor. The proposed algorithm significantly simplifies implementation of the controller and can handle multiple constraints, such as limiting substantial fluctuations of the fuel cell current. Experimental and simulation results demonstrate that the control strategy can optimally split power between the fuel cell and the ultracapacitor and limit the rate of change of the fuel cell current, so as to extend the lifetime of the fuel cell. PMID:24707206
A nudging data assimilation algorithm for the identification of groundwater pumping
NASA Astrophysics Data System (ADS)
Cheng, Wei-Chen; Kendall, Donald R.; Putti, Mario; Yeh, William W.-G.
2009-08-01
This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measured data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies the unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.
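A highly simplified sketch of one nudging cycle with an unknown-pumping sink term is given below; the gains, the scalar treatment of the pumping correction, and the model_step callback are illustrative assumptions rather than the paper's formulation (which estimates six separate well rates).

```python
def nudged_step(h, pumping_est, h_obs, obs_index, model_step,
                dt=1.0, gain_h=0.3, gain_q=0.05):
    """One assimilation cycle of a nudging scheme (h, h_obs are NumPy arrays):
    propagate the head field with the current pumping estimate, relax heads toward
    the observations, and adjust the unknown pumping in proportion to the misfit."""
    h_forecast = model_step(h, pumping_est, dt)          # groundwater flow model step
    innovation = h_obs - h_forecast[obs_index]           # misfit at observation wells
    h_analysis = h_forecast.copy()
    h_analysis[obs_index] += gain_h * innovation         # nudge the state
    # Treat unknown pumping as an extra sink: a persistent negative head misfit
    # implies more extraction than currently assumed, so raise the estimate.
    pumping_est = pumping_est - gain_q * innovation.mean()
    return h_analysis, pumping_est
```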
A nudging data assimilation algorithm for the identification of groundwater pumping
NASA Astrophysics Data System (ADS)
Cheng, W.; Kendall, D. R.; Putti, M.; Yeh, W. W.
2008-12-01
This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measured data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies the unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study, we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.
Reinforce: An Ensemble Approach for Inferring PPI Network from AP-MS Data.
Tian, Bo; Duan, Qiong; Zhao, Can; Teng, Ben; He, Zengyou
2017-05-17
Affinity Purification-Mass Spectrometry (AP-MS) is one of the most important technologies for constructing protein-protein interaction (PPI) networks. In this paper, we propose an ensemble method, Reinforce, for inferring PPI networks from AP-MS data sets. Reinforce is based on rank aggregation and false discovery rate control. Under the null hypothesis that the interaction scores from different scoring methods are randomly generated, Reinforce follows three steps to integrate multiple ranking results from different algorithms or different data sets. The experimental results show that Reinforce achieves more stable and accurate inference results than existing algorithms. The source code of Reinforce and the data sets used in the experiments are available at: https://sourceforge.net/projects/reinforce/.
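For readers unfamiliar with rank aggregation, a minimal Borda-style sketch is given below; it only illustrates the idea of combining scores from several scoring methods into aggregated ranks and is not the Reinforce implementation, whose exact aggregation and FDR steps are described in the paper and source code.

```python
# Minimal rank-aggregation sketch (Borda count), not the Reinforce
# implementation: each scoring method ranks candidate interactions, and
# the aggregated rank is the mean rank across methods. Lower is better.
import numpy as np

def borda_aggregate(score_lists):
    """score_lists: list of 1-D arrays, one per scoring method, where a
    higher score means a more likely interaction. Returns mean ranks."""
    score_lists = [np.asarray(s, dtype=float) for s in score_lists]
    # rank 0 = best candidate within each method
    ranks = [np.argsort(np.argsort(-s)) for s in score_lists]
    return np.mean(ranks, axis=0)

scores_a = np.array([0.9, 0.2, 0.7, 0.4])
scores_b = np.array([0.8, 0.1, 0.9, 0.3])
print(borda_aggregate([scores_a, scores_b]))   # smallest value = top candidate
```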
User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.
1988-01-01
Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates for aircraft position, velocity, attitude, and horizontal winds to be used for guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed with the use of a digital simulation of a commercial transport aircraft and tested with flight recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements on a flight computer. For the real-time operation, a multi-rate implementation of the FINDS algorithm has been partitioned to execute on a dual parallel processor configuration: one based on the translational dynamics and the other on the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, the flow charts for the key subprograms, the input and output files, program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.
Kumar Sahu, Rabindra; Panda, Sidhartha; Biswal, Ashutosh; Chandra Sekhar, G T
2016-03-01
In this paper, a novel Tilt Integral Derivative controller with Filter (TIDF) is proposed for Load Frequency Control (LFC) of multi-area power systems. Initially, a two-area power system is considered and the parameters of the TIDF controller are optimized using the Differential Evolution (DE) algorithm employing an Integral of Time multiplied Absolute Error (ITAE) criterion. The superiority of the proposed approach is demonstrated by comparing the results with some recently published heuristic approaches such as Firefly Algorithm (FA), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) optimized PID controllers for the same interconnected power system. Investigations reveal that the proposed TIDF controllers provide a better dynamic response than the PID controllers in terms of minimum undershoots and settling times of frequency as well as tie-line power deviations following a disturbance. The proposed approach is also extended to two widely used three-area test systems considering nonlinearities such as Generation Rate Constraint (GRC) and Governor Dead Band (GDB). To improve the performance of the system, a Thyristor Controlled Series Compensator (TCSC) is also considered and the performance of the TIDF controller in the presence of the TCSC is investigated. It is observed that system performance improves with the inclusion of the TCSC. Finally, sensitivity analysis is carried out to test the robustness of the proposed controller by varying the system parameters, operating condition and load pattern. It is observed that the proposed controllers are robust and perform satisfactorily with variations in operating condition, system parameters and load pattern. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
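A hedged sketch of the tuning loop described is given below: SciPy's differential evolution minimizes an ITAE cost obtained by simulating a closed loop. The plant here is a toy first-order system with PID gains and arbitrary bounds; the paper's actual model is a multi-area LFC system with a TIDF controller.

```python
# Hedged sketch: Differential Evolution tuning of controller gains
# against an ITAE criterion on a toy first-order plant, not the
# multi-area LFC model of the paper. Plant, PID structure, bounds and
# step size are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution

def itae_cost(gains, dt=0.01, t_end=10.0):
    kp, ki, kd = gains
    y, integ, prev_err = 0.0, 0.0, 1.0
    cost, t = 0.0, 0.0
    while t < t_end:
        err = 1.0 - y                       # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                  # toy plant: dy/dt = -y + u
        if abs(y) > 1e6:                    # penalize unstable gain sets
            return 1e9
        prev_err = err
        t += dt
        cost += t * abs(err) * dt           # ITAE: integral of t*|e(t)|
    return cost

bounds = [(0.0, 10.0)] * 3                  # kp, ki, kd search ranges
result = differential_evolution(itae_cost, bounds, maxiter=50, seed=1)
print(result.x, result.fun)
```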
A reconsideration of negative ratings for network-based recommendation
NASA Astrophysics Data System (ADS)
Hu, Liang; Ren, Liang; Lin, Wenbin
2018-01-01
Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of sparsity of data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.
Optimal trajectories of aircraft and spacecraft
NASA Technical Reports Server (NTRS)
Miele, A.
1990-01-01
Work done on algorithms for the numerical solution of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on the calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear, are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant-pitch trajectories and maximum-angle-of-attack trajectories. Spacecraft trajectories, in particular the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer, are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.
A Two-Wheel Observing Mode for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
Starin, Scott R.; ODonnell, James R., Jr.
2001-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE). Due to the MAP project's limited mass, power, and budget, a traditional reliability concept including fully redundant components was not feasible. The MAP design employs selective hardware redundancy, along with backup software modes and algorithms, to improve the odds of mission success. This paper describes the effort to develop a backup control mode, known as Observing II, that will allow the MAP science mission to continue in the event of a failure of one of its three reaction wheel assemblies. This backup science mode requires a change from MAP's nominal zero-momentum control system to a momentum-bias system. In this system, existing thruster-based control modes are used to establish a momentum bias about the sun line sufficient to spin the spacecraft up to the desired scan rate. The natural spacecraft dynamics exhibit spin and nutation similar to the nominal MAP science mode with different relative rotation rates, so the two reaction wheels are used to establish and maintain the desired nutation angle from the sun line. Detailed descriptions of the Observing II control algorithm and simulation results will be presented, along with the operational considerations of performing the rest of MAP's necessary functions with only two wheels.
NASA Astrophysics Data System (ADS)
Boz, Utku; Basdogan, Ipek
2015-12-01
Structural vibration is a major cause of noise problems, discomfort and mechanical failures in aerospace, automotive and marine systems, which are mainly composed of plate-like structures. In order to reduce structural vibrations in these structures, active vibration control (AVC) is an effective approach. Adaptive filtering methodologies are preferred in AVC due to their ability to adjust themselves to the varying dynamics of the structure during operation. The filtered-X LMS (FXLMS) algorithm is a simple adaptive filtering algorithm widely implemented in active control applications. Proper implementation of FXLMS requires the availability of a reference signal to mimic the disturbance and a model of the dynamics between the control actuator and the error sensor, namely the secondary path. However, the controller output can interfere with the reference signal, and the secondary path dynamics may change during operation. The interference problem can be resolved by using an infinite impulse response (IIR) filter, which feeds one or more previous control signals back to the controller output, and the changing secondary path dynamics can be updated using an online modeling technique. In this paper, an IIR-filtering-based filtered-U LMS (FULMS) controller is combined with an online secondary path modeling algorithm to suppress the vibrations of a plate-like structure. The results are validated through numerical and experimental studies. The results show that the FULMS with online secondary path modeling approach provides better vibration rejection and a higher convergence rate than its FXLMS counterpart.
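A single-channel FxLMS sketch is shown below to illustrate the core adaptive-filtering update on which FULMS builds: the reference signal is filtered through the secondary-path estimate before it drives the weight update. The FIR structure, step size, and sign convention are assumptions; the paper's controller additionally uses IIR (FULMS) filtering and online secondary-path modeling, which are not reproduced here.

```python
# Minimal single-channel FxLMS sketch (assumed FIR secondary-path
# estimate with len(s_hat) <= n_taps); not the FULMS/online-modeling
# controller of the paper.
import numpy as np

def fxlms(x, d, s_hat, n_taps=32, mu=1e-3):
    """x: reference signal, d: disturbance at the error sensor,
    s_hat: estimated secondary-path impulse response."""
    w = np.zeros(n_taps)                     # adaptive control filter
    x_buf = np.zeros(n_taps)                 # reference history
    fx_buf = np.zeros(n_taps)                # filtered-reference history
    s_buf = np.zeros(len(s_hat))             # control-signal history
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                        # control output
        s_buf = np.roll(s_buf, 1); s_buf[0] = y
        y_at_sensor = s_hat @ s_buf          # control through secondary path
        e[n] = d[n] + y_at_sensor            # residual at the error sensor
        # the reference filtered by s_hat drives the LMS update
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_hat @ x_buf[:len(s_hat)]
        w -= mu * e[n] * fx_buf
    return w, e
```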
Active control of flexible structures using a fuzzy logic algorithm
NASA Astrophysics Data System (ADS)
Cohen, Kelly; Weller, Tanchum; Ben-Asher, Joseph Z.
2002-08-01
This study deals with the development and application of an active control law for the vibration suppression of beam-like flexible structures experiencing transient disturbances. Collocated pairs of sensors/actuators provide active control of the structure. A design methodology for the closed-loop control algorithm based on fuzzy logic is proposed. First, the behavior of the open-loop system is observed. Then, the number and locations of collocated actuator/sensor pairs are selected. The proposed control law, which is based on the principles of passivity, commands the actuator to emulate the behavior of a dynamic vibration absorber. The absorber is tuned to a targeted frequency, whereas the damping coefficient of the dashpot is varied in a closed loop using a fuzzy logic based algorithm. This approach not only ensures inherent stability associated with passive absorbers, but also circumvents the phenomenon of modal spillover. The developed controller is applied to the AFWAL/FIB 10 bar truss. Simulated results using MATLAB© show that the closed-loop system exhibits fairly quick settling times and desirable performance, as well as robustness characteristics. To demonstrate the robustness of the control system to changes in the temporal dynamics of the flexible structure, the transient response to a considerably perturbed plant is simulated. The modal frequencies of the 10 bar truss were raised as well as lowered substantially, thereby significantly perturbing the natural frequencies of vibration. For these cases, too, the developed control law provides adequate settling times and rates of vibrational energy dissipation.
Parameter optimization of electrochemical machining process using black hole algorithm
NASA Astrophysics Data System (ADS)
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant as higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered to evaluate the performance of the black hole algorithm (BHA). BHA builds on the fundamental idea of black hole theory and has fewer operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the considered process using a single objective at a time. The results obtained using BHA are found to be better when compared with the results of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), attempted by previous researchers.
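A hedged sketch of the black hole algorithm on a generic bounded minimization problem follows; the placeholder sphere objective stands in for the ECM responses (MRR, overcut), and the population size, iteration count, and event-horizon rule follow the commonly published form of BHA rather than the paper's exact settings.

```python
# Hedged sketch of the Black Hole Algorithm for bounded minimization;
# the objective below is a placeholder, not the ECM response models.
import numpy as np

def black_hole(objective, bounds, n_stars=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    stars = rng.uniform(lo, hi, size=(n_stars, len(lo)))
    fitness = np.array([objective(s) for s in stars])
    bh_idx = fitness.argmin()                       # best star = black hole
    for _ in range(n_iter):
        bh = stars[bh_idx]
        # stars drift toward the black hole (the black hole itself stays put)
        stars += rng.random((n_stars, 1)) * (bh - stars)
        stars = np.clip(stars, lo, hi)
        fitness = np.array([objective(s) for s in stars])
        if fitness.min() < fitness[bh_idx]:
            bh_idx = fitness.argmin()
        # stars crossing the event horizon are re-created randomly
        radius = fitness[bh_idx] / fitness.sum()
        dist = np.linalg.norm(stars - stars[bh_idx], axis=1)
        absorbed = (dist < radius) & (np.arange(n_stars) != bh_idx)
        stars[absorbed] = rng.uniform(lo, hi, size=(absorbed.sum(), len(lo)))
    return stars[bh_idx], fitness[bh_idx]

# toy use: minimize a sphere function over [-5, 5]^2
best_x, best_f = black_hole(lambda x: float(np.sum(x**2)), [(-5, 5), (-5, 5)])
print(best_x, best_f)
```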
Algorithmic formulation of control problems in manipulation
NASA Technical Reports Server (NTRS)
Bejczy, A. K.
1975-01-01
The basic characteristics of manipulator control algorithms are discussed. The state of the art in the development of manipulator control algorithms is briefly reviewed. Different end-point control techniques are described together with control algorithms which operate on external sensor (imaging, proximity, tactile, and torque/force) signals in realtime. Manipulator control development at JPL is briefly described and illustrated with several figures. The JPL work pays special attention to the front or operator input end of the control algorithms.
Online adaptive neural control of a robotic lower limb prosthesis
NASA Astrophysics Data System (ADS)
Spanias, J. A.; Simon, A. M.; Finucane, S. B.; Perreault, E. J.; Hargrove, L. J.
2018-02-01
Objective. The purpose of this study was to develop and evaluate an adaptive intent recognition algorithm that continuously learns to incorporate a lower limb amputee’s neural information (acquired via electromyography (EMG)) as they ambulate with a robotic leg prosthesis. Approach. We present a powered lower limb prosthesis that was configured to acquire the user’s neural information and kinetic/kinematic information from embedded mechanical sensors, and identify and respond to the user’s intent. We conducted an experiment with eight transfemoral amputees over multiple days. EMG and mechanical sensor data were collected while subjects using a powered knee/ankle prosthesis completed various ambulation activities such as walking on level ground, stairs, and ramps. Our adaptive intent recognition algorithm automatically transitioned the prosthesis into the different locomotion modes and continuously updated the user’s model of neural data during ambulation. Main results. Our proposed algorithm accurately and consistently identified the user’s intent over multiple days, despite changing neural signals. The algorithm incorporated 96.31% [0.91%] (mean, [standard error]) of neural information across multiple experimental sessions, and outperformed non-adaptive versions of our algorithm—with a 6.66% [3.16%] relative decrease in error rate. Significance. This study demonstrates that our adaptive intent recognition algorithm enables incorporation of neural information over long periods of use, allowing assistive robotic devices to accurately respond to the user’s intent with low error rates.
2016-01-01
This paper presents an algorithm, for use with a Portable Powered Ankle-Foot Orthosis (i.e., PPAFO) that can automatically detect changes in gait modes (level ground, ascent and descent of stairs or ramps), thus allowing for appropriate ankle actuation control during swing phase. An artificial neural network (ANN) algorithm used input signals from an inertial measurement unit and foot switches, that is, vertical velocity and segment angle of the foot. Output from the ANN was filtered and adjusted to generate a final data set used to classify different gait modes. Five healthy male subjects walked with the PPAFO on the right leg for two test scenarios (walking over level ground and up and down stairs or a ramp; three trials per scenario). Success rate was quantified by the number of correctly classified steps with respect to the total number of steps. The results indicated that the proposed algorithm's success rate was high (99.3%, 100%, and 98.3% for level, ascent, and descent modes in the stairs scenario, respectively; 98.9%, 97.8%, and 100% in the ramp scenario). The proposed algorithm continuously detected each step's gait mode with faster timing and higher accuracy compared to a previous algorithm that used a decision tree based on maximizing the reliability of the mode recognition. PMID:28070188
Improved collaborative filtering recommendation algorithm of similarity measure
NASA Astrophysics Data System (ADS)
Zhang, Baofu; Yuan, Baoping
2017-05-01
Collaborative filtering is one of the most widely used recommendation algorithms in personalized recommender systems. The key is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity over the users' commonly rated items but ignore the relationship between those common items and all the items each user has rated. Moreover, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not very efficient. In order to obtain better accuracy, this paper presents an improved similarity measure that takes into account the common preferences between users and the differences in rating scale and in the scores given to common items, and, based on this measure, a collaborative filtering recommendation algorithm with improved similarity is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendation, thus alleviating the impact of data sparsity.
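One plausible form of such an improved measure is sketched below: a Pearson correlation over co-rated items weighted by a Jaccard-like factor that relates the common items to all items either user has rated. This is an illustration in the spirit of the description, not necessarily the paper's exact formula.

```python
# Hedged sketch of an "improved" user-user similarity: Pearson
# correlation on co-rated items, weighted by the share of co-rated items
# among all items either user has rated. One plausible form only.
import numpy as np

def improved_similarity(r_u, r_v):
    """r_u, r_v: rating vectors with 0 meaning 'not rated'."""
    rated_u, rated_v = r_u > 0, r_v > 0
    common = rated_u & rated_v
    if common.sum() < 2:
        return 0.0
    a, b = r_u[common], r_v[common]
    a_c, b_c = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a_c) * np.linalg.norm(b_c)
    pearson = 0.0 if denom == 0 else float(a_c @ b_c / denom)
    jaccard = common.sum() / (rated_u | rated_v).sum()   # common vs all rated
    return jaccard * pearson

u = np.array([5, 3, 0, 4, 0])
v = np.array([4, 2, 1, 5, 0])
print(improved_similarity(u, v))
```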
A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems
Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik; ...
2017-07-25
Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.
NASA Astrophysics Data System (ADS)
Bazhin, V. Yu; Danilov, I. V.; Petrov, P. A.
2018-05-01
During the casting of light alloys and master alloys based on aluminum and magnesium, problems arise with the quality of the metal distribution and its crystallization in the mold. To monitor defects of the molds on the casting conveyor, a camera with a resolution of 780 x 580 pixels and a frame rate of 75 frames per second was selected. Images of molds from the casting machines were used as input data for the neural network algorithm. At the stage of preparing the digital database and its analytical evaluation, a convolutional neural network architecture was chosen for the algorithm. The information flow from the local controller is transferred to the OPC server and then to the SCADA system of the foundry. After training, the accuracy of neural-network defect recognition was about 95.1% on the validation split. The trained weight coefficients of the neural network were then applied to the test split, where the algorithm achieved accuracy identical to that on the validation images. The proposed technical solutions make it possible to increase the efficiency of the automated process control system in the foundry by expanding the digital database.
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform a nonlinear mapping of the squared error and the squared error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size. This receiver is capable of providing both fast convergence/tracking capability and small steady-state misadjustment compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform other variable step-size LMS (VSS-LMS) and variable forgetting-factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB, respectively, in bit-error rate (BER) over multipath fading channels.
Yang, Cheng-Hong; Chuang, Li-Yeh; Lin, Yu-Da
2017-08-01
Detecting epistatic interactions in genome-wide association studies (GWAS) is a computational challenge. The huge number of single-nucleotide polymorphism (SNP) combinations prevents some otherwise powerful algorithms from being applied to detect potential epistasis in large-scale SNP datasets. We propose a new algorithm which combines the differential evolution (DE) algorithm with a classification-based multifactor-dimensionality reduction (CMDR), termed DECMDR. DECMDR uses the CMDR as a fitness measure to evaluate candidate solutions in the DE process when scanning for potential statistical epistasis in GWAS. The results indicate that DECMDR outperforms existing algorithms in terms of detection success rate on large-scale simulations and on real data obtained from the Wellcome Trust Case Control Consortium. In terms of running time, DECMDR can efficiently apply the CMDR to detect significant associations between cases and controls amongst all possible SNP combinations in GWAS. DECMDR is freely available at https://goo.gl/p9sLuJ . chuang@isu.edu.tw or e0955767257@yahoo.com.tw. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
NASA Technical Reports Server (NTRS)
Hanson, Curt
2014-01-01
An adaptive augmenting control algorithm for the Space Launch System (SLS) has been developed at the Marshall Space Flight Center as part of the launch vehicle's baseline flight control system. A prototype version of the SLS flight control software was hosted on a piloted aircraft at the Armstrong Flight Research Center to demonstrate the adaptive controller on a full-scale, realistic application in a relevant flight environment. Concerns regarding adverse interactions between the adaptive controller and a proposed manual steering mode were investigated by giving the pilot trajectory deviation cues and pitch rate command authority.
Speed-constrained three-axes attitude control using kinematic steering
NASA Astrophysics Data System (ADS)
Schaub, Hanspeter; Piggott, Scott
2018-06-01
Spacecraft attitude control solutions typically are torque-level algorithms that simultaneously control both the attitude and angular velocity tracking errors. In contrast, robotic control solutions are kinematic steering commands in which rates are treated as the control variable, and a servo-tracking control subsystem is present to achieve the desired control rates. In this paper, kinematic attitude steering controls are developed in which an outer control loop establishes a desired angular rate response history for a tracking error, and an inner control loop tracks the commanded body angular rates. The overall stability relies on the separation principle of the inner and outer control loops, which must have sufficiently different response time scales. The benefit is that the outer steering law response can be readily shaped to a desired behavior, such as limiting the approach angular velocity when a large tracking error is corrected. A Modified Rodrigues Parameters implementation is presented that smoothly saturates the speed response. A robust nonlinear body rate servo loop is developed which includes integral feedback. This approach provides a convenient modular framework that makes it simple to interchange outer and inner control loops and to readily set up new control implementations. Numerical simulations illustrate the expected performance for an aggressive reorientation maneuver subject to an unknown external torque.
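The outer-loop idea can be sketched as follows: the MRP attitude error is mapped to a commanded body rate through a smooth (arctangent-style) saturation so that large errors approach the target at a capped rate while small errors see a linear gain. The gain, the rate limit, and the specific shaping function are illustrative assumptions; the inner rate servo with integral feedback is not shown.

```python
# Hedged sketch of an outer kinematic steering law: MRP attitude error
# mapped to a commanded body rate through a smooth saturation. Gains and
# the arctan shaping are illustrative; an inner rate servo (not shown)
# would track omega_cmd.
import numpy as np

def mrp_steering(sigma_err, k1=0.5, omega_max=np.radians(2.0)):
    """sigma_err: 3-vector MRP attitude tracking error.
    Returns the commanded body angular rate (rad/s)."""
    return -(2.0 * omega_max / np.pi) * np.arctan(
        (np.pi * k1 / (2.0 * omega_max)) * sigma_err)

# small error -> roughly linear response, large error -> rate limited
print(np.degrees(mrp_steering(np.array([0.01, 0.0, 0.0]))))
print(np.degrees(mrp_steering(np.array([0.8, 0.0, 0.0]))))
```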
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
Previous studies have shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate for which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, either CSI-based modified JPEG or standard JPEG, under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
NASA Astrophysics Data System (ADS)
Smits, K. M.; Drumheller, Z. W.; Lee, J. H.; Illangasekare, T. H.; Regnery, J.; Kitanidis, P. K.
2015-12-01
Aquifers around the world show troubling signs of irreversible depletion and seawater intrusion as climate change, population growth, and urbanization lead to reduced natural recharge rates and overuse. Scientists and engineers have begun to revisit the technology of managed aquifer recharge and recovery (MAR) as a means to increase the reliability of the diminishing and increasingly variable groundwater supply. Unfortunately, MAR systems remain fraught with operational challenges related to the quality and quantity of recharged and recovered water, stemming from a lack of data-driven, real-time control. This research seeks to develop and validate a general simulation-based control optimization algorithm that relies on real-time data collected through embedded sensors and that can be used to ease the operational challenges of MAR facilities. Experiments to validate the control algorithm were conducted at the laboratory scale in a two-dimensional synthetic aquifer under both homogeneous and heterogeneous packing configurations. The synthetic aquifer used well-characterized technical sands and the electrical conductivity signal of an inorganic conservative tracer as a surrogate measure for water quality. The synthetic aquifer was outfitted with an array of sensors and an autonomous pumping system. Experimental results verified the feasibility of the approach and suggested that the system can improve the operation of MAR facilities. The dynamic parameter inversion reduced the average error between the simulated and observed pressures by 12.5 to 71.4%. The control optimization algorithm ran smoothly and generated optimal control decisions. Overall, the results suggest that with some improvements to the inversion and interpolation algorithms, which can be further advanced through testing with laboratory experiments using sensors, the concept can successfully improve the operation of MAR facilities.
Optimization algorithms for large-scale multireservoir hydropower systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiew, K.L.
Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria which include accuracy of results, rate of convergence, smoothness of resulting storage and release trajectories, computer time and memory requirements, robustness and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% of one another. The highest objective value is obtained by IDP, followed closely by OCT. The computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.
Model-based Bayesian signal extraction algorithm for peripheral nerves
NASA Astrophysics Data System (ADS)
Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.
2017-10-01
Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.
Congestion control and routing over satellite networks
NASA Astrophysics Data System (ADS)
Cao, Jinhua
Satellite networks and transmissions find application in the fields of computer communications, telephone communications, television broadcasting, transportation, space situational awareness systems and so on. This thesis mainly focuses on two networking issues affecting satellite networking: network congestion control and network routing optimization. Congestion, which leads to long queueing delays, packet losses or both, is a networking problem that has drawn the attention of many researchers. The goal of congestion control mechanisms is to ensure high bandwidth utilization while avoiding network congestion by regulating the rate at which traffic sources inject packets into a network. In this thesis, we propose a stable congestion controller using data-driven, safe switching control theory to improve the dynamic performance of satellite Transmission Control Protocol/Active Queue Management (TCP/AQM) networks. First, the stable region of the Proportional-Integral (PI) parameters for a nominal model is explored. Then, a PI controller, whose parameters are adaptively tuned by switching among members of a given candidate set using observed plant data, is presented and compared with some classical AQM policy examples, such as Random Early Detection (RED) and fixed PI control. A new cost-detectable switching law with an interval cost function switching algorithm, which improves the performance and also saves computational cost, is developed and compared with a law commonly used in the switching control literature. Finite-gain stability of the system is proved. A fuzzy logic PI controller is incorporated as a special candidate to achieve good performance at all nominal points with the available set of candidate controllers. Simulations are presented to validate the theory. An efficient routing algorithm plays a key role in optimizing network resources. In this thesis, we briefly analyze Low Earth Orbit (LEO) satellite networks, review the Cross Entropy (CE) method and then develop a novel on-demand routing system named the Cross Entropy Accelerated Ant Routing System (CEAARS) for regular-constellation LEO satellite networks. By implementing simulations on an Iridium-like satellite network, we compare the proposed CEAARS algorithm with the two approaches to adaptive routing protocols on the Internet, distance-vector (DV) and link-state (LS), as well as with the original Cross Entropy Ant Routing System (CEARS). DV algorithms are based on the distributed Bellman-Ford algorithm, and LS algorithms are implementations of Dijkstra's single-source shortest path algorithm. The results show that CEAARS not only remarkably improves the convergence speed toward optimal or suboptimal paths, but also reduces the number of overhead ants (management packets).
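As a point of reference for the AQM policies being compared, a minimal PI-type AQM sketch is given below: the packet drop/mark probability is driven by the deviation of the queue length from a reference. The gains, reference queue length, and sampling period are illustrative assumptions, not the thesis's switching controller.

```python
# Minimal sketch of a PI-type AQM controller (in the spirit of the
# PI/RED baselines discussed, not the data-driven switching controller):
# the drop/mark probability tracks the queue-length error. Gains, queue
# reference and sampling period are assumed values.
class PIAQM:
    def __init__(self, kp=1.8e-5, ki=2.0e-6, q_ref=200.0):
        self.kp, self.ki, self.q_ref = kp, ki, q_ref
        self.integral = 0.0

    def drop_probability(self, q_len, dt=0.01):
        err = q_len - self.q_ref          # positive when the queue is too long
        self.integral += err * dt
        p = self.kp * err + self.ki * self.integral
        return min(max(p, 0.0), 1.0)      # clamp to a valid probability

aqm = PIAQM()
print(aqm.drop_probability(q_len=350))
```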
Rainfall Estimation over the Nile Basin using Multi-Spectral, Multi-Instrument Satellite Techniques
NASA Astrophysics Data System (ADS)
Habib, E.; Kuligowski, R.; Sazib, N.; Elshamy, M.; Amin, D.; Ahmed, M.
2012-04-01
Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt, since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared (IR) algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). In this study, the authors report on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application by NFC over the Nile Basin. The algorithm uses a set of rainfall predictors that come from multi-spectral infrared cloud-top observations and self-calibrates them to a set of predictands that come from the more accurate, but less frequent, microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels that have recently become available to NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as the Special Sensor Microwave/Imager (SSM/I), the Special Sensor Microwave Imager and Sounder (SSMIS), the Advanced Microwave Sounding Unit (AMSU), the Advanced Microwave Scanning Radiometer on EOS (AMSR-E), and the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of the coefficients as new MW rain rates arrive, and calibration using static coefficients derived from IR-MW data from past observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms (e.g., the 'Tropical Rainfall Measuring Mission (TRMM) and other sources' (TRMM-3B42) product, and the National Oceanic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product). The algorithm has several potential future applications, such as improving the performance accuracy of hydrologic forecasting models over the Nile Basin, and utilizing the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability using global circulation models and regional climate models.
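The two-step structure can be illustrated with a hedged sketch using off-the-shelf components: linear discriminant analysis for the rain/no-rain separation, followed by a linear regression for rain rate on the pixels classified as raining. The random predictors below are placeholders for SEVIRI IR channels, and the sklearn models stand in for the operational SCaMPR calibration.

```python
# Hedged two-step sketch mirroring the structure described (discriminant
# analysis for rain/no-rain, then regression for rain rate); random
# placeholder predictors stand in for SEVIRI IR channels and a synthetic
# target stands in for MW rain rates.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 4))                       # placeholder IR predictors
rain_rate = np.maximum(0.0, 2.0 + X @ np.array([1.5, -0.8, 0.5, 0.2])
                       + rng.normal(scale=0.5, size=2000))
is_raining = (rain_rate > 2.0).astype(int)           # placeholder MW "truth"

# Step 1: rain / no-rain separation
clf = LinearDiscriminantAnalysis().fit(X, is_raining)
# Step 2: rain-rate regression, trained on raining pixels only
reg = LinearRegression().fit(X[is_raining == 1], rain_rate[is_raining == 1])

def estimate_rain(X_new):
    raining = clf.predict(X_new) == 1
    rates = np.zeros(len(X_new))
    rates[raining] = np.maximum(0.0, reg.predict(X_new[raining]))
    return rates

print(estimate_rain(X[:5]))
```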
A Robustly Stabilizing Model Predictive Control Algorithm
NASA Technical Reports Server (NTRS)
Ackmece, A. Behcet; Carson, John M., III
2007-01-01
A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.
Chen, Pang-Chia
2013-01-01
This paper investigates multi-objective controller design approaches for nonlinear boiler-turbine dynamics subject to actuator magnitude and rate constraints. System nonlinearity is handled by a suitable linear parameter varying system representation with drum pressure as the system varying parameter. Variation of the drum pressure is represented by suitable norm-bounded uncertainty and affine dependence on system matrices. Based on linear matrix inequality algorithms, the magnitude and rate constraints on the actuator and the deviations of fluid density and water level are formulated while the tracking abilities on the drum pressure and power output are optimized. Variation ranges of drum pressure and magnitude tracking commands are used as controller design parameters, determined according to the boiler-turbine's operation range. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Anhang Price, Rebecca; Fagbuyi, Daniel; Harris, Racine; Hanfling, Dan; Place, Frederick; Taylor, Todd B; Kellermann, Arthur L
2013-02-01
Self-triage using web-based decision support could be a useful way to encourage appropriate care-seeking behavior and reduce health system surge in epidemics. However, the feasibility and safety of this strategy have not previously been evaluated. To assess the usability and safety of Strategy for Off-site Rapid Triage (SORT) for Kids, a web-based decision support tool designed to translate clinical guidance developed by the Centers for Disease Control and Prevention to help parents and adult caregivers determine if a child with influenza-like illness requires immediate care in an emergency department (ED). Prospective pilot validation study conducted between February 8 and April 30, 2012. Staff who abstracted medical records and made follow-up calls were blinded to the SORT algorithm's assessment of the child's level of risk. Two pediatric emergency departments in the National Capital Region. Convenience sample of 294 parents and adult caregivers who were at least 18 years of age; able to read and speak English; and the parent or legal guardian of a child 18 years or younger presenting to 1 of 2 EDs with signs and symptoms meeting Centers for Disease Control and Prevention criteria for influenza-like illness. Completion of the SORT for Kids survey. Caregiver ratings of the website's usability and the sensitivity of the underlying algorithm for identifying children who required immediate ED management of influenza-like illness, defined as receipt of 1 or more of 5 essential clinical services. Ninety percent of participants reported that the website was "very easy" to understand and use. Ratings did not differ by respondent race, ethnicity, or educational attainment. Of the 15 patients whose initial ED visit met explicit criteria for clinical necessity, the Centers for Disease Control and Prevention algorithm classified 14 as high risk, resulting in an overall sensitivity of 93.3% (exact 95% CI, 68.1%-99.8%). Specificity of the algorithm was poor. This pilot study suggests that web-based decision support to help parents and adult caregivers self-triage children with influenza-like illness is feasible. However, prospective refinement of the clinical algorithm is needed to improve its specificity without compromising patient safety.
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an invariable step size cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced to the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are also derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, several selected simulation results are shown to demonstrate that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286
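A hedged sketch of one such estimator is shown below: a zero-attracting NLMS with a simple Kwong-Johnston-style variable step-size rule, in a single-channel setting. The specific VSS rule, the l1 zero-attractor weight, and the single-antenna simplification are illustrative assumptions; the paper's algorithms and their sparse penalties may differ in detail.

```python
# Hedged sketch of a zero-attracting NLMS with a variable step size
# (Kwong-Johnston-style rule, used for illustration only). rho sets the
# l1 zero-attraction on the sparse channel estimate.
import numpy as np

def sparse_vss_nlms(x, d, n_taps=16, rho=1e-4,
                    mu_min=0.05, mu_max=1.0, alpha=0.97, gamma=1e-2):
    """x: transmitted pilot sequence, d: received signal."""
    h = np.zeros(n_taps)                     # channel estimate
    x_buf = np.zeros(n_taps)
    mu = mu_max
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        e = d[n] - h @ x_buf                 # a priori estimation error
        norm = x_buf @ x_buf + 1e-8
        mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)
        h += mu * e * x_buf / norm           # normalized LMS step
        h -= rho * np.sign(h)                # zero attractor (l1 penalty)
    return h
```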
A Simple Two Aircraft Conflict Resolution Algorithm
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
2006-01-01
Conflict detection and resolution methods are crucial for distributed air-ground traffic management, in which the crew in the cockpit, dispatchers in operations control centers, and traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection method and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm, which is often used for missile guidance during the terminal phase. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and conflict resolution methods.
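The constant-velocity conflict check can be written compactly, as sketched below: compute the time-to-go to the closest point of approach from the relative position and velocity, then compare the resulting miss distance to a separation threshold. The 5 nmi threshold and the 2-D geometry are assumptions for illustration.

```python
# Sketch of the constant-velocity conflict check described: minimum
# separation and time-to-go to the closest point of approach (CPA) for
# two aircraft extrapolated along their current headings and speeds.
import numpy as np

def detect_conflict(p1, v1, p2, v2, sep_min=5.0):
    """p: position (nmi), v: velocity (nmi/hr), each a 2-vector."""
    r = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    vv = v @ v
    t_cpa = 0.0 if vv < 1e-12 else max(0.0, -(r @ v) / vv)
    d_cpa = np.linalg.norm(r + v * t_cpa)               # miss distance at CPA
    return d_cpa < sep_min, t_cpa, d_cpa

# head-on example: conflict predicted 2.5 minutes ahead with 3 nmi miss
print(detect_conflict([0, 0], [480, 0], [40, 3], [-480, 0]))
```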
Statistical Quality Control of Moisture Data in GEOS DAS
NASA Technical Reports Server (NTRS)
Dee, D. P.; Rukhovets, L.; Todling, R.
1999-01-01
A new statistical quality control algorithm was recently implemented in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The final step in the algorithm consists of an adaptive buddy check that either accepts or rejects outlier observations based on a local statistical analysis of nearby data. A basic assumption in any such test is that the observed field is spatially coherent, in the sense that nearby data can be expected to confirm each other. However, the buddy check resulted in excessive rejection of moisture data, especially during the Northern Hemisphere summer. The analysis moisture variable in GEOS DAS is water vapor mixing ratio. Observational evidence shows that the distribution of mixing ratio errors is far from normal. Furthermore, spatial correlations among mixing ratio errors are highly anisotropic and difficult to identify. Both factors contribute to the poor performance of the statistical quality control algorithm. To alleviate the problem, we applied the buddy check to relative humidity data instead. This variable explicitly depends on temperature and therefore exhibits a much greater spatial coherence. As a result, reject rates of moisture data are much more reasonable and homogeneous in time and space.
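An illustrative buddy-check sketch is given below (not the GEOS DAS code): each observation's departure is compared with the mean departure of its nearby buddies and rejected when it exceeds a tolerance scaled by the local spread. The neighborhood radius and tolerance are assumed values; the operational algorithm adapts its tolerances to the local statistics.

```python
# Illustrative buddy-check sketch (not the GEOS DAS implementation):
# reject an observation when its departure disagrees with nearby
# "buddies" by more than tol times the local spread.
import numpy as np

def buddy_check(positions, values, radius=300.0, tol=3.0):
    """positions: (n, 2) coordinates (km); values: observed-minus-background
    departures. Returns a boolean mask of accepted observations."""
    positions = np.asarray(positions, float)
    values = np.asarray(values, float)
    accept = np.ones(len(values), dtype=bool)
    for i in range(len(values)):
        dist = np.linalg.norm(positions - positions[i], axis=1)
        buddies = (dist < radius) & (dist > 0)
        if buddies.sum() < 2:
            continue                          # too few buddies to judge
        spread = values[buddies].std() + 1e-6
        if abs(values[i] - values[buddies].mean()) > tol * spread:
            accept[i] = False
    return accept
```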
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ofek, Y.
1994-05-01
This work describes a new technique, based on exchanging control signals between neighboring nodes, for constructing a stable and fault-tolerant global clock in a distributed system with an arbitrary topology. It is shown that it is possible to construct a global clock reference with a time step that is much smaller than the propagation delay over the network's links. The synchronization algorithm ensures that the global clock 'tick' has a stable periodicity, and therefore it is possible to tolerate failures of links and of clocks that operate faster and/or slower than nominally specified, as well as hard failures. The approach taken in this work is to generate a global clock from the ensemble of the local transmission clocks and not to directly synchronize these high-speed clocks. The steady-state algorithm, which generates the global clock, is executed in hardware by the network interface of each node. At the network interface, it is possible to measure accurately the propagation delay between neighboring nodes with a small error or uncertainty and thereby to achieve global synchronization that is proportional to these error measurements. It is shown that the local clock drift (or rate uncertainty) has only a secondary effect on the maximum global clock rate. The synchronization algorithm can tolerate any physical failure. 18 refs.
Discrimination of herbicide-resistant kochia with hyperspectral imaging
NASA Astrophysics Data System (ADS)
Nugent, Paul W.; Shaw, Joseph A.; Jha, Prashant; Scherrer, Bryan; Donelick, Andrew; Kumar, Vipan
2018-01-01
A hyperspectral imager was used to differentiate herbicide-resistant versus herbicide-susceptible biotypes of the agronomic weed kochia, in different crops in the field at the Southern Agricultural Research Center in Huntley, Montana. Controlled greenhouse experiments showed that enough information was captured by the imager to classify plants as either a crop, herbicide-susceptible or herbicide-resistant kochia. The current analysis is developing an algorithm that will work in more uncontrolled outdoor situations. In overcast conditions, the algorithm correctly identified dicamba-resistant kochia, glyphosate-resistant kochia, and glyphosate- and dicamba-susceptible kochia with 67%, 76%, and 80% success rates, respectively.
False Discovery Control in Large-Scale Spatial Multiple Testing
Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin
2014-01-01
This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for the analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods by analyzing the time trends in tropospheric ozone in the eastern US. PMID:25642138
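For comparison with the spatial procedures discussed, the classical Benjamini-Hochberg step-up procedure for point-wise FDR control is sketched below; unlike the paper's oracle and Bayesian procedures, it does not exploit spatial structure.

```python
# Classical Benjamini-Hochberg step-up procedure, shown only as a
# point of reference for point-wise FDR control; it ignores the spatial
# structure that the paper's procedures exploit.
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean array of rejected hypotheses at FDR level q."""
    p = np.asarray(p_values, float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()        # largest index passing the test
        reject[order[:k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6, 0.9]))
```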
Guo, Zongyi; Chang, Jing; Guo, Jianguo; Zhou, Jun
2018-06-01
This paper focuses on the adaptive twisting sliding mode control for the Hypersonic Reentry Vehicles (HRVs) attitude tracking issue. The HRV attitude tracking model is transformed into the error dynamics in matched structure, whereas an unmeasurable state is redefined by lumping the existing unmatched disturbance with the angular rate. Hence, an adaptive finite-time observer is used to estimate the unknown state. Then, an adaptive twisting algorithm is proposed for systems subject to disturbances with unknown bounds. The stability of the proposed observer-based adaptive twisting approach is guaranteed, and the case of noisy measurement is analyzed. Also, the developed control law avoids the aggressive chattering phenomenon of the existing adaptive twisting approaches because the adaptive gains decrease close to the disturbance once the trajectories reach the sliding surface. Finally, numerical simulations on the attitude control of the HRV are conducted to verify the effectiveness and benefit of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Attitude control system of the Delfi-n3Xt satellite
NASA Astrophysics Data System (ADS)
Reijneveld, J.; Choukroun, D.
2013-12-01
This work is concerned with the development of the attitude control algorithms that will be implemented on board the Delfi-n3Xt nanosatellite, which is to be launched in 2013. One of the mission objectives is to demonstrate Sun pointing and three-axis stabilization. The attitude control modes and the associated algorithms are described. The control authority is shared between three body-mounted magnetorquers (MTQ) and three orthogonal reaction wheels. The attitude information is retrieved from Sun vector measurements, Earth magnetic field measurements, and gyro measurements. The design of the control is achieved as a trade-off between simplicity and performance. Stabilization and Sun pointing are achieved via the successive application of the classical B-dot control law and a quaternion feedback control. For the purpose of Sun pointing, a simple quaternion estimation scheme is implemented based on geometric arguments, where the need for a costly optimal filtering algorithm is alleviated and a single line-of-sight (LoS) measurement is required - here the Sun vector. Beyond the three-axis Sun pointing mode, spinning Sun pointing modes are also described and used as demonstration modes. The three-axis Sun pointing mode requires reaction wheels and magnetic control, while the spinning control modes are implemented with magnetic control only. In addition, a simple scheme for angular rate estimation using Sun vector and Earth magnetic field measurements is tested for the case of gyro failures. The performance of the various control modes is illustrated via extensive simulations over time spans of several orbits. The simulation models of the dynamic space environment, the attitude hardware, and the onboard controller logic use realistic assumptions. All control modes satisfy the minimal Sun pointing requirements for power generation.
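The two control laws mentioned can be sketched in a few lines: a classical B-dot magnetic detumbling command and a quaternion-feedback wheel torque command. The gains are illustrative and are not the Delfi-n3Xt flight values; the onboard estimation schemes are not shown.

```python
# Hedged sketch of the two control laws mentioned: B-dot magnetic
# detumbling and quaternion-feedback wheel torque. Gains are
# illustrative, not the Delfi-n3Xt flight values.
import numpy as np

def bdot_dipole(b_dot, k_bdot=5e4):
    """Magnetorquer dipole command (A m^2) from the measured rate of
    change of the body-frame magnetic field (T/s)."""
    return -k_bdot * np.asarray(b_dot, float)

def quaternion_feedback_torque(q_err, omega, k_q=2e-4, k_w=1e-3):
    """q_err: error quaternion [q1, q2, q3, q4] with the scalar part last;
    omega: body rates (rad/s). Returns a reaction-wheel torque command."""
    q_vec = np.asarray(q_err[:3], float)
    # sign of the scalar part avoids the quaternion "unwinding" rotation
    return -k_q * np.sign(q_err[3]) * q_vec - k_w * np.asarray(omega, float)
```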
Input-output oriented computation algorithms for the control of large flexible structures
NASA Technical Reports Server (NTRS)
Minto, K. D.
1989-01-01
An overview is given of work in progress aimed at developing computational algorithms addressing two important aspects in the control of large flexible space structures; namely, the selection and placement of sensors and actuators, and the resulting multivariable control law design problem. The issue of sensor/actuator set selection is particularly crucial to obtaining a satisfactory control design, as clearly a poor choice will inherently limit the degree to which good control can be achieved. With regard to control law design, the researchers are driven by concerns stemming from the practical issues associated with eventual implementation of multivariable control laws, such as reliability, limit protection, multimode operation, sampling rate selection, processor throughput, etc. Naturally, the burden imposed by dealing with these aspects of the problem can be reduced by ensuring that the complexity of the compensator is minimized. Our approach to these problems is based on extensions to input/output oriented techniques that have proven useful in the design of multivariable control systems for aircraft engines. In particular, researchers are exploring the use of relative gain analysis and the condition number as a means of quantifying the process of sensor/actuator selection and placement for shape control of a large space platform.
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
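For readers unfamiliar with DPCM, here is a minimal toy sketch of previous-pixel prediction with a fixed non-uniform quantizer; the level table is an assumption chosen for illustration, and the multilevel Huffman stage described in the abstract is omitted.

```python
import numpy as np

# Illustrative non-uniform quantizer: finer steps for small prediction errors.
LEVELS = np.array([-48, -24, -12, -6, -2, 0, 2, 6, 12, 24, 48], dtype=float)

def quantize(e):
    return LEVELS[np.argmin(np.abs(LEVELS - e))]

def dpcm_encode_line(line):
    """Previous-pixel (non-adaptive) predictor DPCM for one scan line."""
    recon_prev = line[0]
    symbols = [line[0]]                 # first sample sent as-is
    for x in line[1:]:
        e_q = quantize(x - recon_prev)  # quantized prediction error
        symbols.append(e_q)
        recon_prev = recon_prev + e_q   # encoder tracks the decoder's reconstruction
    return symbols

def dpcm_decode_line(symbols):
    recon = [symbols[0]]
    for e_q in symbols[1:]:
        recon.append(recon[-1] + e_q)
    return np.array(recon)

line = np.array([100, 102, 105, 120, 119, 90], dtype=float)
print(dpcm_decode_line(dpcm_encode_line(line)))
```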
Wang, Wendy T J; Olson, Sharon L; Campbell, Anne H; Hanten, William P; Gleeson, Peggy B
2003-03-01
The purpose of this study was to determine the effectiveness of an individualized physical therapy intervention in treating neck pain based on a clinical reasoning algorithm. Treatment effectiveness was examined by assessing changes in impairment, physical performance, and disability in response to intervention. One treatment group of 30 patients with neck pain completed physical therapy treatment. The control group of convenience was formed by a cohort group of 27 subjects who also had neck pain but did not receive treatment for various reasons. There were no significant differences between groups in demographic data and the initial test scores of the outcome measures. A quasi-experimental, nonequivalent, pretest-posttest control group design was used. A physical therapist rendered an eclectic intervention to the treatment group based on a clinical decision-making algorithm. Treatment outcome measures included the following five dependent variables: cervical range of motion, numeric pain rating, timed weighted overhead endurance, the supine capital flexion endurance test, and the Patient Specific Functional Scale. Both the treatment and control groups completed the initial and follow-up examinations, with an average duration of 4 wk between tests. Five mixed analyses of variance with follow-up tests showed a significant difference for all outcome measures in the treatment group compared with the control group. After an average 4 wk of physical therapy intervention, patients in the treatment group demonstrated statistically significant increases of cervical range of motion, decrease of pain, increases of physical performance measures, and decreases in the level of disability. The control group showed no differences in all five outcome variables between the initial and follow-up test scores. This study delineated algorithm-based clinical reasoning strategies for evaluating and treating patients with cervical pain. The algorithm can help clinicians classify patients with cervical pain into clinical patterns and provides pattern-specific guidelines for physical therapy interventions. An organized and specific physical therapy program was effective in improving the status of patients with neck pain.
NASA Technical Reports Server (NTRS)
Comstock, James R., Jr.; Ghatas, Rania W.; Consiglio, Maria C.; Chamberlain, James P.; Hoffler, Keith D.
2015-01-01
This study evaluated the effects of communications delays and winds on air traffic controller ratings of acceptability of horizontal miss distances (HMDs) for encounters between Unmanned Aircraft Systems (UAS) and manned aircraft in a simulation of the Dallas-Ft. Worth (DFW) airspace. Fourteen encounters per hour were staged in the presence of moderate background traffic. Seven recently retired controllers with experience at DFW served as subjects. Guidance provided to the UAS pilots for maintaining a given HMD was provided by information from Detect and Avoid (DAA) self-separation algorithms (Stratway+) displayed on the Multi-Aircraft Control System. This guidance consisted of amber "bands" on the heading scale of the UAS navigation display indicating headings that would result in a loss of well clear between the UAS and nearby traffic. Winds tested were successfully handled by the DAA algorithms and did not affect the controller acceptability ratings of the HMDs. Voice communications delays for the UAS were also tested and included one-way delay times of 0, 400, 1200, and 1800 msec. For longer communications delays, there were changes in strategy and communications flow that were observed and reported by the controllers. The aim of this work is to provide useful information for guiding future rules and regulations applicable to flying UAS in the NAS. Information from this study will also be of value to the Radio Technical Commission for Aeronautics (RTCA) Special Committee 228 - Minimum Performance Standards for UAS.
An approach of traffic signal control based on NLRSQP algorithm
NASA Astrophysics Data System (ADS)
Zou, Yuan-Yang; Hu, Yu
2017-11-01
This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective of the model is to minimize the total weighted queue length at the end of each cycle. Then, a combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are conducted to study how to set the initial solution of the algorithm so that a better local optimal solution can be obtained more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution, the better the local optimal solution that can be obtained.
Genetics-based control of a mimo boiler-turbine plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.M.; Lee, K.Y.
1994-12-31
A genetic algorithm is used to develop an optimal controller for a non-linear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results of the genetic algorithm's controller are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well suited to solving traditionally difficult optimization problems, it is found that the algorithm is capable of developing the controller based on input/output information only. This controller achieves a performance comparable to the standard linear quadratic regulator.
Shen, Yanyan; Wang, Shuqiang; Wei, Zhiming
2014-01-01
Dynamic spectrum sharing has drawn intensive attention in cognitive radio networks. The secondary users are allowed to use the available spectrum to transmit data if the interference to the primary users is maintained at a low level. Cooperative transmission for secondary users can reduce the transmission power and thus improve the performance further. We study the joint subchannel pairing and power allocation problem in relay-based cognitive radio networks. The objective is to maximize the sum rate of the secondary user that is helped by an amplify-and-forward relay. The individual power constraints at the source and the relay, the subchannel pairing constraints, and the interference power constraints are considered. The problem under consideration is formulated as a mixed integer programming problem. By the dual decomposition method, a joint optimal subchannel pairing and power allocation algorithm is proposed. To reduce the computational complexity, two suboptimal algorithms are developed. Simulations have been conducted to verify the performance of the proposed algorithms in terms of sum rate and average running time under different conditions.
Multi-pass encoding of hyperspectral imagery with spectral quality control
NASA Astrophysics Data System (ADS)
Wasson, Steven; Walker, William
2015-05-01
Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
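The spectral angle metric used for the rate-distortion curves in this abstract is straightforward to compute; a minimal sketch for a (rows, cols, bands) hyperspectral cube follows, with the averaging convention assumed.

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (radians) between an original and a reconstructed pixel spectrum."""
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def mean_spectral_angle(cube_ref, cube_rec):
    """Average spectral angle over all pixels of a (rows, cols, bands) cube."""
    ref = cube_ref.reshape(-1, cube_ref.shape[-1])
    rec = cube_rec.reshape(-1, cube_rec.shape[-1])
    return np.mean([spectral_angle(a, b) for a, b in zip(ref, rec)])
```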
NASA Astrophysics Data System (ADS)
Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin
Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multi-class services in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which assigns different priorities to different service classes and call types through different reward/penalty policies according to network load and average call arrival rate. To speed up the running time of CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components, such as encoding, population initialization, fitness function, and mutation, are all optimized in terms of the traits of the CAC problem. The simulation demonstrates that the proposed CAC scheme outperforms similar schemes, which means the optimization is realized. Finally, the simulation shows the efficiency of OGA.
Dynamic variable selection in SNP genotype autocalling from APEX microarray data.
Podder, Mohua; Welch, William J; Zamar, Ruben H; Tebbutt, Scott J
2006-11-30
Single nucleotide polymorphisms (SNPs) are DNA sequence variations, occurring when a single nucleotide--adenine (A), thymine (T), cytosine (C) or guanine (G)--is altered. Arguably, SNPs account for more than 90% of human genetic variation. Our laboratory has developed a highly redundant SNP genotyping assay consisting of multiple probes with signals from multiple channels for a single SNP, based on arrayed primer extension (APEX). This mini-sequencing method is a powerful combination of a highly parallel microarray with distinctive Sanger-based dideoxy terminator sequencing chemistry. Using this microarray platform, our current genotype calling system (known as SNP Chart) is capable of calling single SNP genotypes by manual inspection of the APEX data, which is time-consuming and exposed to user subjectivity bias. Using a set of 32 Coriell DNA samples plus three negative PCR controls as a training data set, we have developed a fully-automated genotyping algorithm based on simple linear discriminant analysis (LDA) using dynamic variable selection. The algorithm combines separate analyses based on the multiple probe sets to give a final posterior probability for each candidate genotype. We have tested our algorithm on a completely independent data set of 270 DNA samples, with validated genotypes, from patients admitted to the intensive care unit (ICU) of St. Paul's Hospital (plus one negative PCR control sample). Our method achieves a concordance rate of 98.9% with a 99.6% call rate for a set of 96 SNPs. By adjusting the threshold value for the final posterior probability of the called genotype, the call rate reduces to 94.9% with a higher concordance rate of 99.6%. We also reversed the two independent data sets in their training and testing roles, achieving a concordance rate up to 99.8%. The strength of this APEX chemistry-based platform is its unique redundancy having multiple probes for a single SNP. Our model-based genotype calling algorithm captures the redundancy in the system considering all the underlying probe features of a particular SNP, automatically down-weighting any 'bad data' corresponding to image artifacts on the microarray slide or failure of a specific chemistry. In this regard, our method is able to automatically select the probes which work well and reduce the effect of other so-called bad performing probes in a sample-specific manner, for any number of SNPs.
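The idea of LDA-based genotype calling with a posterior-probability threshold (trading call rate for concordance, as reported in the abstract) can be sketched roughly as follows; the scikit-learn implementation, the "NN" no-call label, and the 0.9 threshold are assumptions for illustration, not the authors' code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def call_genotypes(X_train, y_train, X_test, threshold=0.9):
    """Call genotypes with LDA; return 'NN' (no-call) when the posterior of the
    best class falls below the threshold, trading call rate for concordance."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)
    post = lda.predict_proba(X_test)
    best = np.argmax(post, axis=1)
    calls = lda.classes_[best].astype(object)
    calls[post[np.arange(len(best)), best] < threshold] = "NN"
    return calls
```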
Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan
2015-01-01
Kinetic analysis of biomolecular interactions is widely used to quantify binding kinetic constants and to determine whether a complex is formed or dissociated within a given time span. Surface plasmon resonance biosensors provide an essential approach to the analysis of biomolecular interactions, including antigen-antibody and receptor-ligand interaction processes. The binding affinity of the antibody for the antigen (or the receptor for the ligand) reflects the biological activities of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and the dissociation rate of the receptor to the ligand are essential parameters for the study of signal transmission between cells. In practice, experimental data may yield complicated real-time curves that do not fit the kinetic model well. This paper presents an analysis approach for biomolecular interactions based on the Marquardt algorithm. The algorithm was implemented in a homemade bioanalyzer to perform nonlinear curve-fitting of the association and dissociation processes of the receptor to the ligand. Compared with the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on the initial value, avoiding divergence, but also greatly reduces the number of iterative regressions. The association and dissociation rate constants, ka and kd, and the affinity parameters for the biomolecular interaction, KA and KD, were experimentally obtained as 6.969×10^5 mL·g^-1·s^-1, 0.00073 s^-1, 9.5466×10^8 mL·g^-1 and 1.0475×10^-9 g·mL^-1, respectively, from the injection of HBsAg solution at a concentration of 16 ng·mL^-1. The kinetic constants were evaluated distinctly using the data obtained from the curve-fitting results. PMID:26147997
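A minimal sketch of Levenberg-Marquardt fitting of a 1:1 association-phase sensorgram is shown below, using synthetic data and simplified molar-style units as assumptions; SciPy's curve_fit defaults to the Levenberg-Marquardt method when no bounds are given, which stands in here for the paper's homemade implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 16e-9  # assumed analyte concentration in units consistent with ka

def association(t, ka, kd, rmax):
    """1:1 Langmuir association-phase response for analyte concentration C."""
    kobs = ka * C + kd
    return rmax * (ka * C / kobs) * (1.0 - np.exp(-kobs * t))

# Synthetic noisy sensorgram standing in for real SPR data.
t = np.linspace(0, 300, 301)
rng = np.random.default_rng(0)
data = association(t, 7e5, 7e-4, 120.0) + rng.normal(0, 0.5, t.size)

# curve_fit uses the Levenberg-Marquardt algorithm when no bounds are supplied.
popt, pcov = curve_fit(association, t, data, p0=[1e5, 1e-3, 100.0])
ka, kd, rmax = popt
print(f"ka={ka:.3g}, kd={kd:.3g}, KD={kd/ka:.3g}")
```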
Direct adaptive control of a PUMA 560 industrial robot
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Lee, Thomas; Delpech, Michel
1989-01-01
The implementation and experimental validation of a new direct adaptive control scheme on a PUMA 560 industrial robot is described. The testbed facility consists of a Unimation PUMA 560 six-jointed robot and controller, and a DEC MicroVAX II computer which hosts the Robot Control C Library software. The control algorithm is implemented on the MicroVAX which acts as a digital controller for the PUMA robot, and the Unimation controller is effectively bypassed and used merely as an I/O device to interface the MicroVAX to the joint motors. The control algorithm for each robot joint consists of an auxiliary signal generated by a constant-gain Proportional plus Integral plus Derivative (PID) controller, and an adaptive position-velocity (PD) feedback controller with adjustable gains. The adaptive independent joint controllers compensate for the inter-joint couplings and achieve accurate trajectory tracking without the need for the complex dynamic model and parameter values of the robot. Extensive experimental results on PUMA joint control are presented to confirm the feasibility of the proposed scheme, in spite of strong interactions between joint motions. Experimental results validate the capabilities of the proposed control scheme. The control scheme is extremely simple and computationally very fast for concurrent processing with high sampling rates.
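A per-joint sketch of the structure described above (fixed-gain PID auxiliary signal plus a PD term with adjustable gains) is given below; the gradient-type gain adaptation law and all numerical gains are assumptions for illustration only, not the paper's adaptation scheme.

```python
class AdaptiveJointController:
    """Per-joint control law sketched from the abstract: a constant-gain PID
    auxiliary signal plus a PD term whose gains are adapted online from the
    tracking error (crude gradient-type update, assumed here)."""

    def __init__(self, kp, ki, kd, gamma_p=1e-3, gamma_d=1e-4, dt=0.001):
        self.kp, self.ki, self.kd = kp, ki, kd      # fixed PID gains
        self.kp_ad, self.kd_ad = 0.0, 0.0           # adjustable PD gains
        self.gamma_p, self.gamma_d = gamma_p, gamma_d
        self.dt, self.int_e, self.prev_e = dt, 0.0, 0.0

    def update(self, q_des, q, qd_des, qd):
        e, ed = q_des - q, qd_des - qd
        self.int_e += e * self.dt
        de = (e - self.prev_e) / self.dt
        self.prev_e = e
        aux = self.kp * e + self.ki * self.int_e + self.kd * de   # PID auxiliary signal
        self.kp_ad += self.gamma_p * e * e * self.dt              # assumed gain adaptation
        self.kd_ad += self.gamma_d * e * ed * self.dt
        return aux + self.kp_ad * e + self.kd_ad * ed
```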
NASA Astrophysics Data System (ADS)
Zhang, Xianxia; Wang, Jian; Qin, Tinggao
2003-09-01
Intelligent control algorithms are introduced into the temperature and humidity control system. A multi-mode PI-single-neuron control algorithm is proposed for single-loop control of temperature and humidity. In order to remove the coupling between temperature and humidity, a new decoupling method, called fuzzy decoupling, is presented. Decoupling is achieved by a fuzzy controller that dynamically modifies the static decoupling coefficient. Using the PI-single-neuron algorithm for the single-loop control of temperature and humidity, the paper provides simulated output response curves for no decoupling control, static decoupling control, and fuzzy decoupling control. These control algorithms are easily implemented in single-chip-based hardware systems.
A Multistrategy Optimization Improved Artificial Bee Colony Algorithm
Liu, Wen
2014-01-01
To address the artificial bee colony algorithm's shortcomings of premature convergence and a slow convergence rate, an improved algorithm is proposed. Chaotic reverse learning strategies are used to initialize the swarm in order to improve the global search ability of the algorithm and maintain its diversity; the similarity degree of individuals in the population is used to characterize population diversity; the population diversity measure is used as an indicator to dynamically and adaptively adjust the nectar positions, so that premature and local convergence are effectively avoided; and a dual-population search mechanism is introduced in the search stage, whose parallel search considerably improves the convergence rate. Simulation experiments on 10 standard test functions, compared with other algorithms, show that the improved algorithm has a faster convergence rate and a stronger capacity to escape local optima. PMID:24982924
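The chaotic reverse (opposition-based) initialization mentioned above can be sketched as follows; the logistic map, the sphere objective, and the "keep the better of each point and its opposite" rule are assumptions for illustration rather than the paper's exact construction.

```python
import numpy as np

def chaotic_reverse_init(n_food, dim, lower, upper, seed=0.7):
    """Initialize food sources with a logistic chaotic map, then keep the better
    of each point and its opposition (reverse) point under an objective f."""
    def f(x):                      # placeholder objective (sphere function)
        return np.sum(x ** 2)

    x = np.empty((n_food, dim))
    c = seed
    for i in range(n_food):
        for j in range(dim):
            c = 4.0 * c * (1.0 - c)                   # logistic map in (0, 1)
            x[i, j] = lower + c * (upper - lower)
    x_rev = lower + upper - x                         # opposition-based (reverse) points
    keep = np.array([f(a) <= f(b) for a, b in zip(x, x_rev)])
    return np.where(keep[:, None], x, x_rev)

print(chaotic_reverse_init(5, 3, -5.0, 5.0))
```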
Control of equipment isolation system using wavelet-based hybrid sliding mode control
NASA Astrophysics Data System (ADS)
Huang, Shieh-Kung; Loh, Chin-Hsiung
2017-04-01
Critical non-structural equipment, including life-saving equipment in hospitals, circuit breakers, computers, high-technology instrumentation, etc., is vulnerable to strong earthquakes, and the failure of such vibration-sensitive equipment causes severe economic loss. In order to protect vibration-sensitive equipment or machinery against strong earthquakes, various innovative control algorithms have been developed to compute the compensating forces to be applied. These new or improved control strategies, such as algorithms based on optimal control theory and sliding mode control (SMC), have also been developed for structural engineering as a key element of smart structure technology. Optimal control theory, one of the most common methodologies in feedback control, finds control forces by minimizing a cost function according to a chosen optimality criterion. For example, the linear-quadratic regulator (LQR) has been the most popular control algorithm over the past three decades, and a number of modifications have been proposed to increase the efficiency of the classical LQR algorithm. However, despite its simplicity and ease of implementation, LQR is susceptible to parameter uncertainty and modeling error due to the complex nature of civil structures. Different from LQR control, SMC, a robust and easily implemented control algorithm, has also been studied. SMC is a nonlinear control methodology that forces the structural system to slide along prescribed surfaces or boundaries; hence this control algorithm is naturally robust with respect to parametric uncertainties of a structure. Early attempts at protecting vibration-sensitive equipment were based on the use of existing control algorithms as described above. In recent years, however, researchers have tried to renew existing control algorithms or develop new ones adapted to the complex nature of civil structures, including the control of both structures and non-structural components. The aim of this paper is to develop a hybrid control algorithm for controlling structures and equipment simultaneously, overcoming the limitations of classical feedback control by combining the advantages of classical LQR and SMC. To suppress vibrations whose frequency content in strong earthquakes differs from the natural frequencies of civil structures, the hybrid control algorithm is integrated with a wavelet-based vibration control algorithm. The performance of the classical, hybrid, and wavelet-based hybrid control algorithms, as well as the responses of the structure and non-structural components, are evaluated and discussed through numerical simulation in this study.
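The classical LQR component of such a hybrid law is computed from the algebraic Riccati equation; a minimal sketch follows, where the toy single-story structural model and the weighting matrices are assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K minimizing the integral of x'Qx + u'Ru,
    giving the feedback u = -K x used as the classical component of the law."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy single-story structure: states [displacement, velocity].
A = np.array([[0.0, 1.0], [-100.0, -0.4]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.diag([100.0, 1.0]), R=np.array([[0.01]]))
print(K)
```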
Bouchard, M
2001-01-01
In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration have been published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest-descent algorithms based on two distinct gradient approaches were introduced for training the controller network. The two gradient approaches are sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to new recursive-least-squares algorithms for training the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms is evaluated for multichannel nonlinear active control systems. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
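For orientation, the sketch below shows the linear single-channel filtered-x LMS baseline that the filtered-x gradient approach generalizes to neural controllers; it is not the paper's recursive-least-squares algorithm, and it assumes the secondary-path estimate equals the true path.

```python
import numpy as np

def fxlms(x, d, sec_path, n_taps=32, mu=1e-3):
    """Single-channel filtered-x LMS: adapt controller weights w using the
    reference x filtered through an estimate of the secondary path."""
    w = np.zeros(n_taps)
    x_buf = np.zeros(n_taps)          # reference history for the controller
    fx_buf = np.zeros(n_taps)         # filtered-reference history for the update
    y_buf = np.zeros(len(sec_path))   # controller-output history for the secondary path
    x_sec = np.zeros(len(sec_path))   # reference history for the filtered-x signal
    e_hist = np.empty(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                                  # controller output
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e = d[n] + sec_path @ y_buf                    # residual at the error sensor
        x_sec = np.roll(x_sec, 1); x_sec[0] = x[n]
        fx = sec_path @ x_sec                          # filtered reference sample
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
        w -= mu * e * fx_buf                           # LMS weight update
        e_hist[n] = e
    return w, e_hist
```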
Development of model reference adaptive control theory for electric power plant control applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mabius, L.E.
1982-09-15
The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm (i.e., model-following control law), a Model Reference Adaptive Control (MRAC) algorithm, and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady-state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeros. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics, and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature, and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.
NASA Astrophysics Data System (ADS)
Liu, Jinxin; Chen, Xuefeng; Yang, Liangdong; Gao, Jiawei; Zhang, Xingwu
2017-11-01
In the field of active noise and vibration control (ANVC), a considerable part of unwelcome noise and vibration results from rotating machines, giving the response signal a multiple-frequency spectrum. Narrowband filtered-x least mean square (NFXLMS) is a very popular algorithm for suppressing such noise and vibration. It performs well because a priori knowledge of the fundamental frequency of the noise source (called the reference frequency) is exploited. However, if this prior knowledge is inaccurate, the control performance is dramatically degraded. This phenomenon is called reference frequency mismatch (RFM). In this paper, a novel narrowband ANVC algorithm with an orthogonal pair-wise reference frequency regulator is proposed to compensate for the RFM problem. Firstly, the RFM phenomenon in traditional NFXLMS is closely investigated both analytically and numerically. The results show that RFM changes the parameter estimation problem of the adaptive controller into a parameter tracking problem. Then, adaptive sinusoidal oscillators with output rectification are introduced as the reference frequency regulator to compensate for the RFM problem. Simulation results show that the proposed algorithm can dramatically suppress multiple-frequency noise and vibration with an improved convergence rate whether or not RFM is present. Finally, case studies using experimental data are conducted under conditions of no, small, and large RFM. The shaft radial run-out signal of a rotor test platform is applied to simulate the primary noise, and an IIR model identified from a real steel structure is applied to simulate the secondary path. The results further verify the robustness and effectiveness of the proposed algorithm.
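A single-tone narrowband FxLMS with an orthogonal sine/cosine reference pair is sketched below to make the RFM issue concrete: if the assumed f0 drifts from the true disturbance frequency, the two weights must track moving targets rather than constants. The secondary path is modeled as a known gain and phase at f0, and the proposed oscillator-based regulator from the paper is not reproduced.

```python
import numpy as np

def nfxlms_tone(d, f0, fs, sec_gain, sec_phase, mu=0.01):
    """Single-frequency narrowband FxLMS: a sine/cosine reference pair at the
    assumed fundamental f0 drives two adaptive weights a and b."""
    a, b = 0.0, 0.0
    e_hist = np.empty(len(d))
    for n in range(len(d)):
        t = n / fs
        xs, xc = np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)
        # anti-noise after passing through the secondary path (gain and phase at f0)
        y_sec = sec_gain * (a * np.sin(2 * np.pi * f0 * t + sec_phase)
                            + b * np.cos(2 * np.pi * f0 * t + sec_phase))
        e = d[n] + y_sec                                  # residual at the error sensor
        fxs = sec_gain * np.sin(2 * np.pi * f0 * t + sec_phase)   # filtered references
        fxc = sec_gain * np.cos(2 * np.pi * f0 * t + sec_phase)
        a -= mu * e * fxs
        b -= mu * e * fxc
        e_hist[n] = e
    return e_hist

fs, f0 = 1000, 50
n = np.arange(2000)
d = 0.8 * np.sin(2 * np.pi * f0 * n / fs + 0.3)           # synthetic tonal disturbance
e = nfxlms_tone(d, f0, fs, sec_gain=0.9, sec_phase=0.5)
print(np.mean(e[-200:] ** 2))                             # residual power after adaptation
```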
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed that the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage is typically only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
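One way to realize a line search over time-of-flight (enhancement 3 above) is a golden-section search, assuming fuel use is unimodal in the time-of-flight; in the sketch below, solve_pdg is a hypothetical placeholder for the inner SOCP solve, replaced by a smooth surrogate so the example runs on its own.

```python
import numpy as np

def solve_pdg(tf):
    """Hypothetical placeholder: solve the convex PDG problem for a fixed
    time-of-flight tf and return (feasible, fuel_used). In practice this wraps
    an SOCP solver; a smooth surrogate stands in here for illustration."""
    feasible = tf >= 40.0                          # assumed thrust-feasibility boundary
    fuel = (tf - 55.0) ** 2 + 300.0                # assumed unimodal fuel-vs-tf curve
    return feasible, fuel

def golden_search_tof(tf_lo, tf_hi, tol=0.5):
    """Golden-section line search over time-of-flight (bounded iteration count)."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = tf_lo, tf_hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        fc = solve_pdg(c)[1] if solve_pdg(c)[0] else np.inf
        fd = solve_pdg(d)[1] if solve_pdg(d)[0] else np.inf
        if fc < fd:
            b = d
        else:
            a = c
    return 0.5 * (a + b)

print(golden_search_tof(30.0, 120.0))
```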
A collaborative filtering recommendation algorithm based on weighted SimRank and social trust
NASA Astrophysics Data System (ADS)
Su, Chang; Zhang, Butao
2017-05-01
Collaborative filtering is one of the most widely used recommendation technologies, but the data sparsity and cold start problems of collaborative filtering algorithms are difficult to solve effectively. In order to alleviate the data sparsity problem, firstly, a weighted improved SimRank algorithm is proposed to compute the rating similarity between users in the rating data set. The improved SimRank can find more nearest neighbors for target users according to the transmissibility of rating similarity. Then, we build a trust network and introduce the calculation of trust degree in the trust relationship data set. Finally, we combine rating similarity and trust to build a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the proposed algorithm effectively improves the recommendation precision of the collaborative filtering algorithm.
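A minimal sketch of the final step, blending rating similarity with trust into one neighbor-selection score and using it in a mean-centered prediction, is given below; the linear blend with mixing weight alpha and the prediction formula are common conventions assumed here, not necessarily the paper's exact weighting.

```python
import numpy as np

def combined_similarity(rating_sim, trust, alpha=0.6):
    """Blend weighted-SimRank rating similarity with trust (alpha is assumed)."""
    return alpha * np.asarray(rating_sim) + (1.0 - alpha) * np.asarray(trust)

def predict_rating(target_mean, neighbor_ratings, neighbor_means, sims):
    """Mean-centered weighted prediction over the selected nearest neighbors."""
    sims = np.asarray(sims, dtype=float)
    dev = np.asarray(neighbor_ratings) - np.asarray(neighbor_means)
    return target_mean + np.dot(sims, dev) / (np.sum(np.abs(sims)) + 1e-12)

sims = combined_similarity([0.7, 0.4, 0.2], [0.9, 0.1, 0.5])
print(predict_rating(3.5, [4, 5, 2], [3.8, 4.2, 2.5], sims))
```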
NASA Technical Reports Server (NTRS)
Fehrmann, Elizabeth A.; Kenny, Barbara H.
2004-01-01
The NASA Glenn Research Center (GRC) has been working to advance the technology necessary for a flywheel energy storage system for the past several years. Flywheels offer high efficiency, durability, and near-complete discharge capabilities not produced by typical chemical batteries. These characteristics show flywheels to be an attractive alternative to the more typical energy storage solutions. Flywheels also offer the possibility of combining what are now two separate systems in space applications into one: energy storage, which is currently provided by batteries, and attitude control, which is currently provided by control moment gyroscopes (CMGs) or reaction wheels. To date, the NASA Glenn research effort has produced the control algorithms necessary to demonstrate flywheel operation up to a rated speed of 60,000 RPM and the combined operation of two flywheel machines to simultaneously provide energy storage and single-axis attitude control. Two position-sensorless algorithms are used to control the motor/generator, one for low (0 to 1200 RPM) speeds and one for high speeds. The algorithm allows the transition from the low speed method to the high speed method, but the transition from the high to low speed method was not originally included. This leads to a limitation in the existing motor/generator control code that does not allow the flywheels to be commanded to zero speed (and back in the negative speed direction) after the initial startup. In a multi-flywheel system providing both energy storage and attitude control to a spacecraft, speed reversal may be necessary.
Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.
Cole, Steve W; Galic, Zoran; Zack, Jerome A
2003-09-22
Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding JAVA application at http://microarray.crump.ucla.edu/focus
Application of Boiler Op for combustion optimization at PEPCO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maines, P.; Williams, S.; Levy, E.
1997-09-01
Title IV requires the reduction of NOx at all stations within the PEPCO system. To assist PEPCO plant personnel in achieving low heat rates while meeting NOx targets, Lehigh University's Energy Research Center and PEPCO developed a new combustion optimization software package called Boiler Op. The Boiler Op code contains an expert system, neural networks, and an optimization algorithm. The expert system guides the plant engineer through a series of parametric boiler tests required for the development of a comprehensive boiler database. The data are then analyzed by the neural networks and optimization algorithm to provide the boiler control settings that result in the best possible heat rate at a target NOx level or produce minimum NOx. Boiler Op has been used at both the Potomac River and Morgantown Stations to help PEPCO engineers optimize combustion. With the use of Boiler Op, Morgantown Station operates under low NOx restrictions and continues to achieve record heat rate values, similar to pre-retrofit conditions. Potomac River Station achieves the regulatory NOx limit through the use of Boiler Op-recommended control settings and without NOx burners. Importantly, software like Boiler Op cannot be used alone; its application must be in concert with human intelligence to ensure unit safety, reliability, and accurate data collection.
Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology.
Hsu, Yu-Liang; Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen
2017-07-15
This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents' wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident's feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed for validating the effectiveness and feasibility of the smart home system was built and verified experimentally. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% by the 2-fold cross-validation, 5-fold cross-validation, 10-fold cross-validation, and leave-one-subject-out cross-validation strategies. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy for determining the conditions of the indoor living environment.
Preliminary Design and Analysis of the GIFTS Instrument Pointing System
NASA Technical Reports Server (NTRS)
Zomkowski, Paul P.
2003-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Instrument is the next-generation spectrometer for remote-sensing weather satellites. The GIFTS instrument will be used to perform scans of the Earth's atmosphere by assembling a series of fields of view (FOVs) into a larger pattern. Realization of this process is achieved by step scanning the instrument FOV in a contiguous fashion across any desired portion of the visible Earth. A 2.3 arc second pointing stability, with respect to the scanning instrument, must be maintained for the duration of the FOV scan. A star tracker producing attitude data at a 100 Hz rate will be used by the autonomous pointing algorithm to precisely track target FOVs on the surface of the Earth. The main objective is to validate the pointing algorithm in the presence of spacecraft disturbances and determine acceptable disturbance limits from expected noise sources. Proof-of-concept validation of the pointing system algorithm is carried out with a full system simulation developed using Matlab Simulink. Models for the following components function within the full system simulation: inertial reference unit (IRU), attitude control system (ACS), reaction wheels, star tracker, and mirror controller. With the spacecraft orbital position and attitude maintained to within specified limits, the pointing algorithm receives quaternion, ephemeris, and initialization data that are used to construct the required mirror pointing commands at a 100 Hz rate. This comprehensive simulation will also aid in obtaining a thorough understanding of spacecraft disturbances and other sources of pointing system errors. Parameter sensitivity studies and disturbance analysis will be used to obtain limits of operability for the GIFTS instrument. The culmination of this simulation development and analysis will be used to validate the specified performance requirements outlined for this instrument.
Deformable structure registration of bladder through surface mapping.
Xiong, Li; Viswanathan, Akila; Stewart, Alexandra J; Haker, Steven; Tempany, Clare M; Chin, Lee M; Cormack, Robert A
2006-06-01
Cumulative dose distributions in fractionated radiation therapy depict the dose to normal tissues and therefore may permit an estimation of the risk of normal tissue complications. However, calculation of these distributions is highly challenging because of interfractional changes in the geometry of patient anatomy. This work presents an algorithm for deformable structure registration of the bladder and the verification of the accuracy of the algorithm using phantom and patient data. In this algorithm, the registration process involves conformal mapping of genus zero surfaces using finite element analysis, and guided by three control landmarks. The registration produces a correspondence between fractions of the triangular meshes used to describe the bladder surface. For validation of the algorithm, two types of balloons were inflated gradually to three times their original size, and several computerized tomography (CT) scans were taken during the process. The registration algorithm yielded a local accuracy of 4 mm along the balloon surface. The algorithm was then applied to CT data of patients receiving fractionated high-dose-rate brachytherapy to the vaginal cuff, with the vaginal cylinder in situ. The patients' bladder filling status was intentionally different for each fraction. The three required control landmark points were identified for the bladder based on anatomy. Out of an Institutional Review Board (IRB) approved study of 20 patients, 3 had radiographically identifiable points near the bladder surface that were used for verification of the accuracy of the registration. The verification point as seen in each fraction was compared with its predicted location based on affine as well as deformable registration. Despite the variation in bladder shape and volume, the deformable registration was accurate to 5 mm, consistently outperforming the affine registration. We conclude that the structure registration algorithm presented works with reasonable accuracy and provides a means of calculating cumulative dose distributions.
Edelson, Dana P.; Eilevstjønn, Joar; Weidman, Elizabeth K.; Retzer, Elizabeth; Vanden Hoek, Terry L.; Abella, Benjamin S.
2009-01-01
Objective: Hyperventilation is both common and detrimental during cardiopulmonary resuscitation (CPR). Chest wall impedance algorithms have been developed to detect ventilations during CPR. However, impedance signals are challenged by noise artifact from multiple sources, including chest compressions. Capnography has been proposed as an alternate method to measure ventilations. We sought to assess and compare the adequacy of these two approaches. Methods: Continuous chest wall impedance and capnography were recorded during consecutive in-hospital cardiac arrests. Algorithms utilizing each of these data sources were compared to a manually determined “gold standard” reference ventilation rate. In addition, a combination algorithm, which utilized the highest of the impedance or capnography values in any given minute, was similarly evaluated. Results: Data were collected from 37 cardiac arrests, yielding 438 min of data with continuous chest compressions and concurrent recording of impedance and capnography. The manually calculated mean ventilation rate was 13.3±4.3/min. In comparison, the defibrillator’s impedance-based algorithm yielded an average rate of 11.3±4.4/min (p=0.0001) while the capnography rate was 11.7±3.7/min (p=0.0009). There was no significant difference in sensitivity and positive predictive value between the two methods. The combination algorithm rate was 12.4±3.5/min (p=0.02), which yielded the highest fraction of minutes with respiratory rates within 2/min of the reference. The impedance signal was uninterpretable 19.5% of the time, compared with 9.7% for capnography. However, the signals were only simultaneously non-interpretable 0.8% of the time. Conclusions: Both the impedance and capnography-based algorithms underestimated the ventilation rate. Reliable ventilation rate determination may require a novel combination of multiple algorithms during resuscitation. PMID:20036047
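The per-minute combination rule described above (take the higher of the impedance-derived and capnography-derived rates) reduces to a few lines; the treatment of uninterpretable minutes as missing values is an assumption for illustration.

```python
def combined_ventilation_rate(impedance_rate, capno_rate):
    """Per-minute combination rule: take the higher of the impedance-derived and
    capnography-derived rates, treating uninterpretable minutes (None) as missing."""
    rates = [r for r in (impedance_rate, capno_rate) if r is not None]
    return max(rates) if rates else None

# One minute where impedance is uninterpretable but capnography reads 12/min.
print(combined_ventilation_rate(None, 12))   # -> 12
print(combined_ventilation_rate(10, 13))     # -> 13
```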
NASA Astrophysics Data System (ADS)
Sumantri, Bambang; Uchiyama, Naoki; Sano, Shigenori
2016-01-01
In this paper, a new control structure for a quad-rotor helicopter that employs the least squares method is introduced. This proposed algorithm solves the overdetermined problem of the control input for the translational motion of a quad-rotor helicopter. The algorithm allows all six degrees of freedom to be considered in calculating the control input. The sliding mode controller is applied to achieve robust tracking and stabilization. A saturation function is designed around a boundary layer to reduce the chattering phenomenon that is a common problem in sliding mode control. In order to improve the tracking performance, an integral sliding surface is designed. An energy-saving effect due to chattering reduction is also evaluated. First, the dynamics of the quad-rotor helicopter is derived by the Newton-Euler formulation for a rigid body. Second, a constant plus proportional reaching law is introduced to increase the reaching rate of the sliding mode controller. Global stability of the proposed control strategy is guaranteed based on Lyapunov stability theory. Finally, the robustness and effectiveness of the proposed control system are demonstrated experimentally under wind gusts, and are compared with a regular sliding mode controller, a proportional-differential controller, and a proportional-integral-differential controller.
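A sketch of one translational channel combining the pieces named above (integral sliding surface, constant-plus-proportional reaching law, and boundary-layer saturation) is shown below; the gains and surface coefficients are illustrative assumptions, and the equivalent-control term is omitted for brevity.

```python
import numpy as np

def sat(s, phi):
    """Boundary-layer saturation replacing sign(s) to reduce chattering."""
    return np.clip(s / phi, -1.0, 1.0)

def smc_channel_input(e, e_dot, e_int, lam=2.0, lam_i=0.5, k=4.0, eta=1.0, phi=0.1):
    """One translational channel: integral sliding surface plus a constant-plus-
    proportional reaching law, with sat() used inside the boundary layer."""
    s = e_dot + lam * e + lam_i * e_int            # integral sliding surface
    return -(k * s + eta * sat(s, phi))            # constant + proportional reaching law

print(smc_channel_input(e=0.3, e_dot=-0.1, e_int=0.05))
```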
Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron
2008-01-01
In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) Using an efficient approach to choose the optimal time-of-flight; (ii) Using a computationally inexpensive way to detect the feasibility/ infeasibility of the problem due to the thrust-to-weight constraint; (iii) Incorporating the rotation rate of the planet into the problem formulation; (iv) Developing additional constraints on the position and velocity to guarantee no-subsurface flight between the time samples of the temporal discretization; (v) Developing a fuel-limited targeting algorithm; (vi) Initial result on developing an onboard table lookup method to obtain almost fuel optimal solutions in real-time.
Recursive optimal pruning with applications to tree structured vector quantizers
NASA Technical Reports Server (NTRS)
Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen
1992-01-01
A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.
An implicit iterative algorithm with a tuning parameter for Itô Lyapunov matrix equations
NASA Astrophysics Data System (ADS)
Zhang, Ying; Wu, Ai-Guo; Sun, Hui-Jie
2018-01-01
In this paper, an implicit iterative algorithm is proposed for solving a class of Lyapunov matrix equations arising in Itô stochastic linear systems. A tuning parameter is introduced in this algorithm, and thus the convergence rate of the algorithm can be changed. Some conditions are presented such that the developed algorithm is convergent. In addition, an explicit expression is also derived for the optimal tuning parameter, which guarantees that the obtained algorithm achieves its fastest convergence rate. Finally, numerical examples are employed to illustrate the effectiveness of the given algorithm.
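One common implicit relaxation iteration for an Itô-type Lyapunov equation of the form A'X + XA + C'XC + Q = 0 is sketched below; the equation form, the use of a frozen stochastic term, and the constant relaxation parameter omega are assumptions for illustration, and the paper's analytically optimal tuning parameter is not reproduced.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def ito_lyapunov_iter(A, C, Q, omega=0.8, tol=1e-10, max_iter=500):
    """Implicit iteration for A'X + XA + C'XC + Q = 0: at each step solve a
    deterministic Lyapunov equation with the stochastic term frozen, then blend
    the update with a tuning (relaxation) parameter omega."""
    X = np.zeros_like(Q)
    for _ in range(max_iter):
        rhs = -(Q + C.T @ X @ C)
        X_new = solve_continuous_lyapunov(A.T, rhs)      # solves A' X_new + X_new A = rhs
        X_next = (1.0 - omega) * X + omega * X_new
        if np.linalg.norm(X_next - X, ord='fro') < tol:
            return X_next
        X = X_next
    return X

A = np.array([[-1.0, 0.2], [0.0, -2.0]])
C = 0.3 * np.eye(2)
Q = np.eye(2)
print(ito_lyapunov_iter(A, C, Q))
```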
A demand assignment control in international business satellite communications network
NASA Astrophysics Data System (ADS)
Nohara, Mitsuo; Takeuchi, Yoshio; Takahata, Fumio; Hirata, Yasuo
An experimental system is being developed for use in an international business satellite (IBS) communications network based on demand-assignment (DA) and TDMA techniques. This paper discusses its system design, in particular from the viewpoints of a network configuration, a DA control, and a satellite channel-assignment algorithm. A satellite channel configuration is also presented along with a tradeoff study on transmission rate, HPA output power, satellite resource efficiency, service quality, and so on.
Orion MPCV GN and C End-to-End Phasing Tests
NASA Technical Reports Server (NTRS)
Neumann, Brian C.
2013-01-01
End-to-end integration tests are critical risk reduction efforts for any complex vehicle. A phasing test is an end-to-end integrated test that validates system directional phasing (polarity) from sensor measurement through software algorithms to end effector response. Phasing tests are typically performed on a fully integrated and assembled flight vehicle, where sensors are stimulated by moving the vehicle and the effectors are observed for proper polarity. The Orion Multi-Purpose Crew Vehicle (MPCV) Pad Abort 1 (PA-1) Phasing Test was conducted from inertial measurement to the Launch Abort System (LAS). Orion Exploration Flight Test 1 (EFT-1) has two end-to-end phasing tests planned. The first test, from inertial measurement to Crew Module (CM) reaction control system thrusters, uses navigation and flight control system software algorithms to process commands. The second test, from inertial measurement to the CM S-Band Phased Array Antenna (PAA), uses navigation and communication system software algorithms to process commands. Future Orion flights include Ascent Abort Flight Test 2 (AA-2) and Exploration Mission 1 (EM-1). These flights will include additional or updated sensors, software algorithms and effectors. This paper will explore the implementation of end-to-end phasing tests on a flight vehicle, which involves many constraints, trade-offs and compromises. The Orion PA-1 Phasing Test was conducted at White Sands Missile Range (WSMR) on March 4-6, 2010. This test decreased the risk of mission failure by demonstrating proper flight control system polarity. Demonstration was achieved by stimulating the primary navigation sensor, processing sensor data into commands, and viewing the propulsion response. The PA-1 primary navigation sensor was a Space Integrated Inertial Navigation System (INS) and Global Positioning System (GPS) (SIGI), which has onboard processing, an INS (3 accelerometers and 3 rate gyros), and no GPS receiver. SIGI data was processed by GN&C software into thrust magnitude and direction commands. The processing changes through three phases of powered flight: pitchover, downrange and reorientation. The primary inputs to GN&C are attitude position, attitude rates, angle of attack (AOA) and angle of sideslip (AOS). Pitch and yaw attitude and attitude rate responses were verified by using a flight spare SIGI mounted to a 2-axis rate table. AOA and AOS responses were verified by using data recorded from SIGI movements on a robotic arm located at NASA Johnson Space Center. The data were consolidated and used as an open-loop input to the SIGI. Propulsion was provided by the Launch Abort System (LAS) Attitude Control Motor (ACM), which consisted of a solid motor with 8 nozzles. Each nozzle has active thrust control, varying throat area with a pintle. LAS ACM pintles are observable through optically transparent nozzle covers. SIGI movements on the robot arm, SIGI rate table movements, and LAS ACM pintle responses were video recorded as test artifacts for analysis and evaluation. The PA-1 Phasing Test design was determined based on test performance requirements, operational restrictions and EGSE capabilities. This development progressed through different stages; for convenience these stages are referred to as initial, working group, tiger team, Engineering Review Team (ERT), and final.
Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu
2017-06-17
Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but the convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment, based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model to contain acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test are carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.
VANNI, S.; CASATI, C.; MORONI, F.; RISSO, M.; OTTAVIANI, M.; NAZERIAN, P.; GRIFONI, S.; VANNUCCHI, P.
2014-01-01
SUMMARY Vertigo is generally due to a benign disorder, but it is the most common symptom associated with misdiagnosis of stroke. In this pilot study, we preliminarily assessed the diagnostic performance of a structured bedside algorithm to differentiate central from non-central acute vertigo (AV). Adult patients presenting to a single Emergency Department with vertigo were evaluated with STANDING (SponTAneous Nystagmus, Direction, head Impulse test, standiNG) by one of five trained emergency physicians or evaluated ordinarily by the rest of the medical staff (control group). The gold standard was a complete audiologic evaluation by clinicians who are experts in assessing dizzy patients, together with neuroimaging. Reliability, sensitivity and specificity of STANDING were calculated. Moreover, to evaluate the potential clinical impact of STANDING, neuroimaging and hospitalisation rates were compared with the control group. A total of 292 patients were included, and 48 (16.4%) had a diagnosis of central AV. Ninety-eight (33.4%) patients were evaluated with STANDING. The test had good interobserver agreement (k = 0.76), with very high sensitivity (100%, 95%CI 72.3-100%) and specificity (94.3%, 95%CI 90.7-94.3%). Furthermore, hospitalisation and neuroimaging test rates were lower in the STANDING group than in the control group (27.6% vs. 50.5% and 31.6% vs. 71.1%, respectively). In conclusion, STANDING seems to be a promising, simple, structured bedside algorithm that in this preliminary study identified central AV with very high sensitivity, and was associated with a significant reduction of neuroimaging and hospitalisation rates. PMID:25762835
Anti-aliasing algorithm development
NASA Astrophysics Data System (ADS)
Bodrucki, F.; Davis, J.; Becker, J.; Cordell, J.
2017-10-01
In this paper, we discuss the testing of image processing algorithms for mitigation of aliasing artifacts under pulsed illumination. Previously, two sensors were tested, one with a fixed frame rate and one with an adjustable frame rate; the results showed different degrees of operability when the sensors were subjected to a Quantum Cascade Laser (QCL) pulsed at the frame rate of the fixed-rate sensor. We implemented algorithms to allow the adjustable frame-rate sensor to detect the presence of aliasing artifacts and, in response, to alter the frame rate of the sensor. The result was that the sensor output showed a varying laser intensity (beat note) as opposed to a fixed signal level. A MIRAGE Infrared Scene Projector (IRSP) was used to explore the efficiency of the new algorithms, introducing secondary elements into the sensor's field of view.
Linearization of digital derived rate algorithm for use in linear stability analysis
NASA Technical Reports Server (NTRS)
Graham, R. E.; Porada, T. W.
1985-01-01
The digital derived rate (DDR) algorithm is used to calculate the rate of rotation of the Centaur upper-stage rocket. The DDR is a highly nonlinear algorithm, and classical linear stability analysis of the spacecraft cannot be performed without linearization. The performance of this rate algorithm is characterized by gain and phase curves that drop off at the same frequency, a characteristic that is desirable for many applications. A linearization technique for the DDR algorithm is investigated and described. Examples of the results of the linearization technique are illustrated, and the effects of linearization are described. A linear digital filter may be used as a substitute for performing classical linear stability analyses, while the DDR itself may be used in time response analysis.
Concurrent design of an RTP chamber and advanced control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spence, P.; Schaper, C.; Kermani, A.
1995-12-31
A concurrent-engineering approach is applied to the development of an axisymmetric rapid-thermal-processing (RTP) reactor and its associated temperature controller. Using a detailed finite-element thermal model as a surrogate for actual hardware, the authors have developed and tested a multi-input multi-output (MIMO) controller. Closed-loop simulations are performed by linking the control algorithm with the finite-element code. Simulations show that good temperature uniformity is maintained on the wafer during both steady and transient conditions. A numerical study shows the effect of ramp rate, feedback gain, sensor placement, and wafer-emissivity patterns on system performance.
Research on intelligent algorithm of electro - hydraulic servo control system
NASA Astrophysics Data System (ADS)
Wang, Yannian; Zhao, Yuhui; Liu, Chengtao
2017-09-01
In order to cope with the nonlinear characteristics of the electro-hydraulic servo control system and the influence of complex interference in the industrial field, a fuzzy PID switching learning algorithm is proposed, and a fuzzy PID switching learning controller is designed and applied to the electro-hydraulic servo system. The designed controller not only combines the advantages of fuzzy control and PID control, but also introduces a learning algorithm into the switching function, so that learning of the three parameters in the switching function avoids instability during switching between the fuzzy control and PID control algorithms. It also makes the switching between these two control algorithms smoother than that of a conventional fuzzy PID controller.
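The following is a minimal sketch of the switching idea, assuming a sigmoidal switching function of the error magnitude whose threshold is adapted online; the rule surface, gains, and adaptation law are placeholders, not the controller designed in the paper.

import math

class FuzzyPIDSwitch:
    def __init__(self, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integ, self.prev_e = 0.0, 0.0
        self.threshold, self.eta = 0.5, 0.01       # switching threshold and learning rate

    def fuzzy_action(self, e, de):
        # Very coarse stand-in for a fuzzy rule surface: saturating action on large errors.
        return 3.0 * math.tanh(e) + 0.5 * math.tanh(de)

    def pid_action(self, e, de):
        self.integ += e * self.dt
        return self.kp * e + self.ki * self.integ + self.kd * de

    def update(self, e):
        de = (e - self.prev_e) / self.dt
        self.prev_e = e
        u_fuzzy, u_pid = self.fuzzy_action(e, de), self.pid_action(e, de)
        # Switching function: close to 1 (fuzzy) for large |e|, close to 0 (PID) near the setpoint.
        s = 1.0 / (1.0 + math.exp(-(abs(e) - self.threshold) / 0.1))
        # Simple online adaptation ("learning") of the threshold to soften the hand-over.
        self.threshold += self.eta * (abs(e) - self.threshold)
        return s * u_fuzzy + (1.0 - s) * u_pid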
NASA Astrophysics Data System (ADS)
Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A. Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah
2017-04-01
This research proposes several versions of the modified cuckoo search (MCS) metaheuristic algorithm, deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Specifically, the MCS algorithm incorporates the roulette wheel selection operator to choose the initial host nests (individuals) that give better results, an adaptive inertia weight to control the position exploration of the potential best host nests (solutions), and a dynamic discovery rate to manage the fraction probability of finding the best host nests in the 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard SPEA, forming the MCSPSOSPEA and MCSHCSPEA algorithms, respectively. All the proposed MCS-based algorithms are examined by performing MO optimization on Zitzler-Deb-Thiele’s (ZDT’s) test functions. Pareto optimum trade-offs are made to generate a set of three non-dominated solutions, which are the locations, excitation amplitudes, and excitation phases of the array elements, respectively. Overall, simulations demonstrate that the proposed MCSPSOSPEA outperforms comparable competitors in simultaneously obtaining high antenna directivity, a small half-power beamwidth (HPBW), a low average side lobe level (SLL), and/or significant mitigation of predefined nulls.
Boiler-turbine control system design using a genetic algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.; Lee, K.Y.
1995-12-01
This paper discusses the application of a genetic algorithm to control system design for a boiler-turbine plant. In particular the authors study the ability of the genetic algorithm to develop a proportional-integral (PI) controller and a state feedback controller for a non-linear multi-input/multi-output (MIMO) plant model. The plant model is presented along with a discussion of the inherent difficulties in such controller development. A sketch of the genetic algorithm (GA) is presented and its strategy as a method of control system design is discussed. Results are presented for two different control systems that have been designed with the genetic algorithm.
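As an illustration of the approach, the sketch below uses a plain genetic algorithm to tune the (Kp, Ki) gains of a PI controller against a simple first-order plant. The plant, cost function, and GA settings are assumptions for demonstration and are much simpler than the boiler-turbine model studied in the paper.

import random

def closed_loop_cost(kp, ki, dt=0.01, steps=1000):
    # Integral of absolute error for a unit step applied to the plant dy/dt = -y + u.
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)
        cost += abs(e) * dt
    return cost

def ga_tune(pop_size=30, gens=40, mut=0.2):
    pop = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda gains: closed_loop_cost(*gains))
        parents = pop[: pop_size // 2]                         # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)     # arithmetic crossover
            child = tuple(max(0.0, gain + random.gauss(0, mut)) for gain in child)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda gains: closed_loop_cost(*gains))

print(ga_tune())                                               # best (Kp, Ki) found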
Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-03-31
This paper studies an attacker against a cyber-physical system (CPS) whose goal is to move the state of the CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker’s probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS’s detection statistic. We formulate a linear quadratic cost function that captures the attacker’s control goal and establish constraints on the induced bias that reflect the attacker’s detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker’s state estimate. In the case that the attacker’s bias is upper bounded by a positive constant, we provide two algorithms – an optimal algorithm and a sub-optimal, less computationally intensive algorithm – to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely-controlled helicopter under attack.
Vehicle handling and stability control by the cooperative control of 4WS and DYC
NASA Astrophysics Data System (ADS)
Shen, Huan; Tan, Yun-Sheng
2017-07-01
This paper proposes an integrated control system that coordinates four-wheel steering (4WS) and direct yaw moment control (DYC) to improve vehicle handling and stability. The 4WS and DYC controllers are both designed using sliding mode control. The integrated control system produces a suitable 4WS angle and corrective yaw moment so that the vehicle tracks the desired yaw rate and sideslip angle. To account for changes in the vehicle's longitudinal velocity, which affect driving comfort, both the driving torque and the braking torque are used to generate the corrective yaw moment. Simulation results show the effectiveness of the proposed control algorithm.
A permutation-based non-parametric analysis of CRISPR screen data.
Jia, Gaoxiang; Wang, Xinlei; Xiao, Guanghua
2017-07-19
Clustered regularly-interspaced short palindromic repeats (CRISPR) screens are usually implemented in cultured cells to identify genes with critical functions. Although several methods have been developed or adapted to analyze CRISPR screening data, no single specific algorithm has gained popularity. Thus, rigorous procedures are needed to overcome the shortcomings of existing algorithms. We developed a Permutation-Based Non-Parametric Analysis (PBNPA) algorithm, which computes p-values at the gene level by permuting sgRNA labels, and thus it avoids restrictive distributional assumptions. Although PBNPA is designed to analyze CRISPR data, it can also be applied to analyze genetic screens implemented with siRNAs or shRNAs and drug screens. We compared the performance of PBNPA with competing methods on simulated data as well as on real data. PBNPA outperformed recent methods designed for CRISPR screen analysis, as well as methods used for analyzing other functional genomics screens, in terms of Receiver Operating Characteristics (ROC) curves and False Discovery Rate (FDR) control for simulated data under various settings. Remarkably, the PBNPA algorithm showed better consistency and FDR control on published real data as well. PBNPA yields more consistent and reliable results than its competitors, especially when the data quality is low. R package of PBNPA is available at: https://cran.r-project.org/web/packages/PBNPA/ .
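A toy version of the permutation idea, in the spirit of the description above but not the published implementation, is sketched below: each gene's statistic is the mean log-fold-change of its sgRNAs, and gene-level p-values come from permuting the sgRNA-to-gene assignment. Function and variable names are hypothetical.

import numpy as np

def gene_pvalues(lfc, gene_of_sgrna, n_perm=1000, seed=0):
    # lfc: per-sgRNA log fold changes; gene_of_sgrna: gene label for each sgRNA.
    rng = np.random.default_rng(seed)
    lfc = np.asarray(lfc, dtype=float)
    gene_of_sgrna = np.asarray(gene_of_sgrna)
    genes = np.unique(gene_of_sgrna)
    obs = {g: lfc[gene_of_sgrna == g].mean() for g in genes}
    exceed = {g: 0 for g in genes}
    for _ in range(n_perm):
        perm = rng.permutation(lfc)                          # permute sgRNA labels
        for g in genes:
            if perm[gene_of_sgrna == g].mean() <= obs[g]:    # one-sided: depletion
                exceed[g] += 1
    return {g: (exceed[g] + 1) / (n_perm + 1) for g in genes}

# toy usage with a hypothetical three-gene screen
lfc = [-2.1, -1.8, -2.3, 0.1, -0.2, 0.3, 0.0, 0.4, -0.1]
labels = ["A", "A", "A", "B", "B", "B", "C", "C", "C"]
print(gene_pvalues(lfc, labels))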
Jakob, J; Marenda, D; Sold, M; Schlüter, M; Post, S; Kienle, P
2014-08-01
Complications after cholecystectomy are continuously documented in a nationwide database in Germany. Recent studies demonstrated a lack of reliability of these data. The aim of the study was to evaluate the impact of a control algorithm on documentation quality and the use of routine diagnosis coding as an additional validation instrument. Completeness and correctness of the documentation of complications after cholecystectomy was compared over a time interval of 12 months before and after implementation of an algorithm for faster and more accurate documentation. Furthermore, the coding of all diagnoses was screened to identify intraoperative and postoperative complications. The sensitivity of the documentation for complications improved from 46 % to 70 % (p = 0.05, specificity 98 % in both time intervals). A prolonged time interval of more than 6 weeks between patient discharge and documentation was associated with inferior data quality (incorrect documentation in 1.5 % versus 15 %, p < 0.05). The rate of case documentation within the 6 weeks after hospital discharge was clearly improved after implementation of the control algorithm. Sensitivity and specificity of screening for complications by evaluating routine diagnoses coding were 70 % and 85 %, respectively. The quality of documentation was improved by implementation of a simple memory algorithm.
A comparison of force control algorithms for robots in contact with flexible environments
NASA Technical Reports Server (NTRS)
Wilfinger, Lee S.
1992-01-01
In order to perform useful tasks, the robot end-effector must come into contact with its environment. For such tasks, force feedback is frequently used to control the interaction forces. Control of these forces is complicated by the fact that the flexibility of the environment affects the stability of the force control algorithm. Because of the wide variety of different materials present in everyday environments, it is necessary to gain an understanding of how environmental flexibility affects the stability of force control algorithms. This report presents the theory and experimental results of two force control algorithms: Position Accommodation Control and Direct Force Servoing. The implementation of each of these algorithms on a two-arm robotic test bed located in the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) is discussed in detail. The behavior of each algorithm when contacting materials of different flexibility is experimentally determined. In addition, several robustness improvements to the Direct Force Servoing algorithm are suggested and experimentally verified. Finally, a qualitative comparison of the force control algorithms is provided, along with a description of a general tuning process for each control method.
Genotyping and inflated type I error rate in genome-wide association case/control studies
Sampson, Joshua N; Zhao, Hongyu
2009-01-01
Background One common goal of a case/control genome wide association study (GWAS) is to find SNPs associated with a disease. Traditionally, the first step in such studies is to assign a genotype to each SNP in each subject, based on a statistic summarizing fluorescence measurements. When the distributions of the summary statistics are not well separated by genotype, the act of genotype assignment can lead to more potential problems than acknowledged by the literature. Results Specifically, we show that the proportions of each called genotype need not equal the true proportions in the population, even as the number of subjects grows infinitely large. The called genotypes for two subjects need not be independent, even when their true genotypes are independent. Consequently, p-values from tests of association can be anti-conservative, even when the distributions of the summary statistic for the cases and controls are identical. To address these problems, we propose two new tests designed to reduce the inflation in the type I error rate caused by these problems. The first algorithm, logiCALL, measures call quality by fully exploring the likelihood profile of intensity measurements, and the second algorithm avoids genotyping by using a likelihood ratio statistic. Conclusion Genotyping can introduce avoidable false positives in GWAS. PMID:19236714
Performance of Activity Classification Algorithms in Free-living Older Adults
Sasaki, Jeffer Eidi; Hickey, Amanda; Staudenmayer, John; John, Dinesh; Kent, Jane A.; Freedson, Patty S.
2015-01-01
Purpose To compare activity type classification rates of machine learning algorithms trained on laboratory versus free-living accelerometer data in older adults. Methods Thirty-five older adults (21 F and 14 M; 70.8 ± 4.9 y) performed selected activities in the laboratory while wearing three ActiGraph GT3X+ activity monitors (dominant hip, wrist, and ankle). Monitors were initialized to collect raw acceleration data at a sampling rate of 80 Hz. Fifteen of the participants also wore the GT3X+ in free-living settings and were directly observed for 2-3 hours. Time- and frequency-domain features from acceleration signals of each monitor were used to train Random Forest (RF) and Support Vector Machine (SVM) models to classify five activity types: sedentary, standing, household, locomotion, and recreational activities. All algorithms were trained on lab data (RFLab and SVMLab) and free-living data (RFFL and SVMFL) using 20 s signal sampling windows. Classification accuracy rates of both types of algorithms were tested on free-living data using a leave-one-out technique. Results Overall classification accuracy rates for the algorithms developed from lab data were between 49% (wrist) and 55% (ankle) for the SVMLab algorithms, and 49% (wrist) to 54% (ankle) for RFLab algorithms. The classification accuracy rates for SVMFL and RFFL algorithms ranged from 58% (wrist) to 69% (ankle) and from 61% (wrist) to 67% (ankle), respectively. Conclusion Our algorithms developed on free-living accelerometer data were more accurate in classifying activity type in free-living older adults than our algorithms developed on laboratory accelerometer data. Future studies should consider using free-living accelerometer data to train machine-learning algorithms in older adults. PMID:26673129
Performance of Activity Classification Algorithms in Free-Living Older Adults.
Sasaki, Jeffer Eidi; Hickey, Amanda M; Staudenmayer, John W; John, Dinesh; Kent, Jane A; Freedson, Patty S
2016-05-01
The objective of this study is to compare activity type classification rates of machine learning algorithms trained on laboratory versus free-living accelerometer data in older adults. Thirty-five older adults (21 females and 14 males, 70.8 ± 4.9 yr) performed selected activities in the laboratory while wearing three ActiGraph GT3X+ activity monitors (on the dominant hip, wrist, and ankle; ActiGraph, LLC, Pensacola, FL). Monitors were initialized to collect raw acceleration data at a sampling rate of 80 Hz. Fifteen of the participants also wore the GT3X+ in free-living settings and were directly observed for 2-3 h. Time- and frequency-domain features from acceleration signals of each monitor were used to train random forest (RF) and support vector machine (SVM) models to classify five activity types: sedentary, standing, household, locomotion, and recreational activities. All algorithms were trained on laboratory data (RFLab and SVMLab) and free-living data (RFFL and SVMFL) using 20-s signal sampling windows. Classification accuracy rates of both types of algorithms were tested on free-living data using a leave-one-out technique. Overall classification accuracy rates for the algorithms developed from laboratory data were between 49% (wrist) and 55% (ankle) for the SVMLab algorithms and 49% (wrist) to 54% (ankle) for the RFLab algorithms. The classification accuracy rates for SVMFL and RFFL algorithms ranged from 58% (wrist) to 69% (ankle) and from 61% (wrist) to 67% (ankle), respectively. Our algorithms developed on free-living accelerometer data were more accurate in classifying the activity type in free-living older adults than our algorithms developed on laboratory accelerometer data. Future studies should consider using free-living accelerometer data to train machine learning algorithms in older adults.
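The classification set-up described above can be sketched roughly as follows with scikit-learn: a random forest trained on windowed features and evaluated with a leave-one-subject-out split. The feature extraction is deliberately minimal, and the data below are random placeholders standing in for real accelerometer recordings.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def window_features(acc, fs=80, win_s=20):
    # Mean/std/dominant-frequency features per 20-s window of tri-axial data
    # (shown for context; the toy run below uses placeholder features instead).
    n = fs * win_s
    feats = []
    for start in range(0, len(acc) - n + 1, n):
        w = acc[start:start + n]
        spec = np.abs(np.fft.rfft(w[:, 2]))
        feats.append(np.r_[w.mean(axis=0), w.std(axis=0), spec.argmax()])
    return np.array(feats)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 7))                       # placeholder feature matrix
y = rng.integers(0, 5, size=300)                    # five activity classes
groups = rng.integers(0, 15, size=300)              # participant IDs for the 15 subjects

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print("leave-one-subject-out accuracy: %.2f" % scores.mean())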
Fast packet switching algorithms for dynamic resource control over ATM networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsang, R.P.; Keattihananant, P.; Chang, T.
1996-12-01
Real-time continuous media traffic, such as digital video and audio, is expected to comprise a large percentage of the network load on future high speed packet switch networks such as ATM. A major feature which distinguishes high speed networks from traditional slower speed networks is the large amount of data the network must process very quickly. For efficient network usage, traffic control mechanisms are essential. Currently, most mechanisms for traffic control (such as flow control) have centered on the support of Available Bit Rate (ABR), i.e., non real-time, traffic. With regard to ATM, for ABR traffic, the two major types of schemes which have been proposed are rate-control and credit-control schemes. Neither of these schemes is directly applicable to Real-time Variable Bit Rate (VBR) traffic such as continuous media traffic. Traffic control for continuous media traffic is an inherently difficult problem due to the time-sensitive nature of the traffic and its unpredictable burstiness. In this study, we present a scheme which controls traffic by dynamically allocating/de-allocating resources among competing VCs based upon their real-time requirements. This scheme incorporates a form of rate-control, real-time burst-level scheduling and link-link flow control. We show analytically the potential performance improvements of our rate-control scheme and present a scheme for buffer dimensioning. We also present simulation results of our schemes and discuss the tradeoffs inherent in maintaining high network utilization and statistically guaranteeing many users' Quality of Service.
McGinn, Patrick J; MacQuarrie, Scott P; Choi, Jerome; Tartakovsky, Boris
2017-01-01
In this study, production of the microalga Scenedesmus AMDD in a 300 L continuous flow photobioreactor was maximized using an online flow (dilution rate) control algorithm. To enable online control, biomass concentration was estimated in real time by measuring chlorophyll-related culture fluorescence. A simple microalgae growth model was developed and used to solve the optimization problem aimed at maximizing the photobioreactor productivity. When optimally controlled, Scenedesmus AMDD culture demonstrated an average volumetric biomass productivity of 0.11 g L -1 d -1 over a 25 day cultivation period, equivalent to a 70 % performance improvement compared to the same photobioreactor operated as a turbidostat. The proposed approach for optimizing photobioreactor flow can be adapted to a broad range of microalgae cultivation systems.
Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua
2011-01-01
A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on the single-speed structure, in which all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Different from existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental angle samples and the number of accelerometer incremental velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows increasing the updating rate of the coning and sculling compensation, yet with larger numbers of gyro incremental angle and accelerometer incremental velocity samples, in order to improve the accuracy of the system. Then, in order to implement the new strapdown algorithm in a single FPGA chip, the parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform on the basis of some fighter data. It is shown that this parallel strapdown algorithm on the FPGA platform can greatly decrease the execution time of the algorithm to meet the real-time and high-precision requirements of the system in a highly dynamic environment, relative to the existing implementation on the DSP platform. PMID:22164058
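For background only, the classical two-sample coning correction that such strapdown algorithms generalize can be written as Delta_phi = dtheta1 + dtheta2 + (2/3) dtheta1 x dtheta2; the paper's single-speed, FPGA-parallelized algorithm organizes the samples and update rates differently, so the sketch below is context rather than the proposed method.

import numpy as np

def two_sample_coning(dtheta1, dtheta2):
    # Rotation-vector increment from two consecutive gyro incremental-angle samples.
    return dtheta1 + dtheta2 + (2.0 / 3.0) * np.cross(dtheta1, dtheta2)

# toy usage with small assumed incremental angles (rad)
d1 = np.array([1.0e-4, 2.0e-4, -1.0e-4])
d2 = np.array([1.2e-4, 1.8e-4, -0.9e-4])
print(two_sample_coning(d1, d2))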
Controlling laser driven protons acceleration using a deformable mirror at a high repetition rate
NASA Astrophysics Data System (ADS)
Noaman-ul-Haq, M.; Sokollik, T.; Ahmed, H.; Braenzel, J.; Ehrentraut, L.; Mirzaie, M.; Yu, L.-L.; Sheng, Z. M.; Chen, L. M.; Schnürer, M.; Zhang, J.
2018-03-01
We present results from a proof-of-principle experiment to optimize laser-driven proton acceleration by directly feeding back the proton spectral information to a deformable mirror (DM) controlled by evolutionary algorithms (EAs). By irradiating a stable, high-repetition-rate tape-driven target with ultra-intense pulses at intensities of ∼10²⁰ W/cm², we optimize the maximum energy of the accelerated protons with fluctuations of less than ∼5% near the optimum value. Moreover, due to the spatio-temporal development of the sheath field, modulations in the spectrum are also observed. In particular, a prominent narrow peak with a spread of ∼15% (FWHM) is observed in the low-energy part of the spectrum. These results help in developing the high-repetition-rate optimization techniques required for laser-driven ion accelerators.
A New Item Selection Procedure for Mixed Item Type in Computerized Classification Testing.
ERIC Educational Resources Information Center
Lau, C. Allen; Wang, Tianyou
This paper proposes a new Information-Time index as the basis for item selection in computerized classification testing (CCT) and investigates how this new item selection algorithm can help improve test efficiency for item pools with mixed item types. It also investigates how practical constraints such as item exposure rate control, test…
Brenton, Ashley; Richeimer, Steven; Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Blanchard, John; Meshkin, Brian
2017-01-01
Opioid abuse in chronic pain patients is a major public health issue, with rapidly increasing addiction rates and deaths from unintentional overdose more than quadrupling since 1999. This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm incorporating phenotypic risk factors and neuroscience-associated single-nucleotide polymorphisms (SNPs). The Proove Opioid Risk (POR) algorithm determines the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm incorporating phenotypic risk factors and neuroscience-associated SNPs. In a validation study with 258 subjects with diagnosed opioid use disorder (OUD) and 650 controls who reported using opioids, the POR successfully categorized patients at high and moderate risk of opioid misuse or abuse with 95.7% sensitivity. Regardless of changes in the prevalence of opioid misuse or abuse, the sensitivity of the POR remained >95%. The POR correctly stratifies patients into low-, moderate-, and high-risk categories to appropriately identify patients in need of additional guidance, monitoring, or treatment changes.
Noise suppression methods for robust speech processing
NASA Astrophysics Data System (ADS)
Boll, S. F.; Ravindra, H.; Randall, G.; Armantrout, R.; Power, R.
1980-05-01
Robust speech processing in practical operating environments requires effective environmental and processor noise suppression. This report describes the technical findings and accomplishments during this reporting period for the research program funded to develop real-time, compressed speech analysis-synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure compressed speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include an understanding of how environmental noise degrades narrow-band, coded speech, the development of appropriate real-time noise suppression algorithms, and the development of speech parameter identification methods that treat signal contamination as a fundamental element of the estimation process. This report describes the current research and results in the areas of noise suppression using dual-input adaptive noise cancellation with short-time Fourier transform algorithms, articulation rate change techniques, and an experiment which demonstrated that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10 coded helicopter speech by 10.6 points.
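A minimal sketch of magnitude spectral subtraction in the short-time Fourier domain, the class of algorithm the report evaluates, is given below; the frame length, overlap, window, and spectral floor are assumptions rather than the report's parameters.

import numpy as np

def spectral_subtraction(x, noise, frame=256, hop=128, floor=0.02):
    # Subtract an average noise magnitude spectrum from each windowed frame of x.
    win = np.hanning(frame)
    noise_frames = [noise[i:i + frame] * win
                    for i in range(0, len(noise) - frame + 1, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)
    out = np.zeros(len(x))
    for i in range(0, len(x) - frame + 1, hop):
        spec = np.fft.rfft(x[i:i + frame] * win)
        mag = np.maximum(np.abs(spec) - noise_mag, floor * noise_mag)   # spectral floor
        out[i:i + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame) * win
    return out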
Spot measurement of heart rate based on morphology of PhotoPlethysmoGraphic (PPG) signals.
Madhan Mohan, P; Nagarajan, V; Vignesh, J C
2017-02-01
Due to increasing health consciousness among people, it is imperative to have low-cost health care devices to measure vital parameters such as heart rate and arterial oxygen saturation (SpO2). In this paper, an efficient heart rate monitoring algorithm, based on the morphology of photoplethysmography (PPG) signals, for measuring the spot heart rate (HR) is proposed, together with its real-time implementation. The algorithm performs pre-processing and detects the onsets and systolic peaks of the PPG signal to estimate the heart rate of the subject. Since the algorithm is based on the morphology of the signal, it works well when the subject is not moving, which is a typical test case. Accordingly, this algorithm is developed mainly to measure the heart rate in on-demand applications. Real-time experimental results indicate a heart rate accuracy of 99.5%, a mean absolute percentage error (MAPE) of 1.65%, a mean absolute error (MAE) of 1.18 BPM and a reference closeness factor (RCF) of 0.988. The results further show that the average response time of the algorithm to give the spot HR is 6.85 s, so that users need not wait long to see their HR. The hardware implementation results show that the algorithm requires only 18 KBytes of total memory and runs at high speed with 0.85 MIPS. Therefore, this algorithm can be targeted to low-cost embedded platforms.
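The on-demand measurement described above can be sketched as a simple pipeline: band-pass the PPG to the cardiac band, detect systolic peaks, and convert the mean peak interval to beats per minute. The filter band, peak-detection thresholds, and synthetic test signal are assumptions, not the paper's settings.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def spot_heart_rate(ppg, fs=100.0):
    b, a = butter(2, [0.5 / (fs / 2), 5.0 / (fs / 2)], btype="band")
    clean = filtfilt(b, a, ppg)                            # keep the cardiac band
    peaks, _ = find_peaks(clean, distance=int(0.4 * fs),
                          prominence=0.3 * np.std(clean))  # systolic peaks
    if len(peaks) < 2:
        return None
    return 60.0 * fs / np.mean(np.diff(peaks))             # beats per minute

# toy usage: synthetic 75-BPM pulse-like waveform plus noise
fs = 100.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.25 * t) ** 3 + 0.05 * np.random.randn(t.size)
print(spot_heart_rate(ppg, fs))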
MotieGhader, Habib; Gharaghani, Sajjad; Masoudi-Sobhanzadeh, Yosef; Masoudi-Nejad, Ali
2017-01-01
Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been addressed with meta-heuristic algorithms such as GA, PSO, ACO, and others. In this work, two novel hybrid meta-heuristic algorithms, i.e. Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), which are based on the genetic algorithm and learning automata, are proposed for QSAR feature selection. The SGALA algorithm uses the advantages of the genetic algorithm and learning automata sequentially, and the MGALA algorithm uses the advantages of the genetic algorithm and learning automata simultaneously. We applied our proposed algorithms to select the minimum possible number of features from three different datasets, and we observed that the MGALA and SGALA algorithms had the best outcome both individually and on average compared to other feature selection algorithms. Through comparison of our proposed algorithms, we deduced that the rate of convergence to the optimal result of the MGALA and SGALA algorithms was better than that of the GA, ACO, PSO and LA algorithms. In the end, the results of the GA, ACO, PSO, LA, SGALA, and MGALA algorithms were used as the input of an LS-SVR model, and the results from the LS-SVR models showed that the LS-SVR model had more predictive ability with the input from the SGALA and MGALA algorithms than with the input from all other mentioned algorithms. Therefore, the results corroborate that not only is the predictive efficiency of the proposed algorithms better, but their rate of convergence is also superior to that of all other mentioned algorithms. PMID:28979308
Yang, Lei; Yang, Ming; Xu, Zihao; Zhuang, Xiaoqi; Wang, Wei; Zhang, Haibo; Han, Lu; Xu, Liang
2014-10-01
The purpose of this paper is to report the research and design of the control system of the magnetic coupling centrifugal blood pump in our laboratory, and to briefly describe the structure of the pump and the principles of the body circulation model. The performance of the blood pump is not only related to its materials and structure, but also depends on the control algorithm. We studied a motor-current double-loop control algorithm for the brushless DC motor. In order to make the algorithm adjust its parameters in different situations, we used a self-tuning fuzzy PI control algorithm and gave details of how the fuzzy rules were designed. We mainly used Matlab Simulink to simulate the motor control system and test the performance of the algorithm, and briefly introduced how these algorithms are implemented in the hardware system. Finally, by building the platform and conducting experiments, we showed that the self-tuning fuzzy PI control algorithm can greatly improve both the dynamic and static performance of the blood pump and make the motor speed and the blood pump flow stable and adjustable.
Adaptive control in the presence of unmodeled dynamics. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rohrs, C. E.
1982-01-01
Stability and robustness properties of a wide class of adaptive control algorithms in the presence of unmodeled dynamics and output disturbances were investigated. The class of adaptive algorithms considered are those commonly referred to as model reference adaptive control algorithms, self-tuning controllers, and dead beat adaptive controllers, developed for both continuous-time systems and discrete-time systems. A unified analytical approach was developed to examine the class of existing adaptive algorithms. It was discovered that all existing algorithms contain an infinite gain operator in the dynamic system that defines command reference errors and parameter errors; it is argued that such an infinite gain operator appears to be generic to all adaptive algorithms, whether they exhibit explicit or implicit parameter identification. It is concluded that none of the adaptive algorithms considered can be used with confidence in a practical control system design, because instability will set in with a high probability.
A nonlinear optimal control approach for chaotic finance dynamics
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Loia, V.; Tommasetti, A.; Troisi, O.
2017-11-01
A new nonlinear optimal control approach is proposed for stabilization of the dynamics of a chaotic finance model. The dynamic model of the financial system, which expresses the interaction between the interest rate, the investment demand, the price exponent and the profit margin, undergoes approximate linearization around local operating points. These local equilibria are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last value of the control input vector that was exerted on it. The approximate linearization makes use of Taylor series expansion and of the computation of the associated Jacobian matrices. The truncation of higher order terms in the Taylor series expansion is considered to be a modelling error that is compensated by the robustness of the control loop. As the control algorithm runs, the temporary equilibrium is shifted towards the reference trajectory and finally converges to it. The control method needs to compute an H-infinity feedback control law at each iteration, and requires the repetitive solution of an algebraic Riccati equation. Through Lyapunov stability analysis it is shown that an H-infinity tracking performance criterion holds for the control loop. This implies elevated robustness against model approximations and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is proven.
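For reference, one standard state-feedback H-infinity design solves, at each linearization point, an algebraic Riccati equation of the generic form below (with disturbance input matrix $B_1$, control input matrix $B_2$, output weighting $C_1^{T}C_1$, and attenuation level $\gamma$); the paper's exact equation and notation may differ from this textbook form.

\[ A^{T}P + PA + C_{1}^{T}C_{1} + P\left(\gamma^{-2}B_{1}B_{1}^{T} - B_{2}B_{2}^{T}\right)P = 0, \qquad u = -B_{2}^{T}P\,x. \]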
An Actor-Critic based controller for glucose regulation in type 1 diabetes.
Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G
2013-02-01
A novel adaptive approach for glucose control in individuals with type 1 diabetes under sensor-augmented pump therapy is proposed. The controller is based on Actor-Critic (AC) learning and is inspired by the principles of reinforcement learning and optimal control theory. The main characteristics of the proposed controller are (i) simultaneous adjustment of both the insulin basal rate and the bolus dose, (ii) initialization based on clinical procedures, and (iii) real-time personalization. The effectiveness of the proposed algorithm in terms of glycemic control has been investigated in silico in adults, adolescents and children under open-loop and closed-loop approaches, using announced meals with uncertainties on the order of ±25% in the estimation of carbohydrates. The results show that glucose regulation is efficient in all three groups of patients, even with uncertainties in the level of carbohydrates in the meal. The percentages in the A+B zones of the Control Variability Grid Analysis (CVGA) were 100% for adults, and 93% for both adolescents and children. The AC-based controller seems to be a promising approach for the automatic adjustment of insulin infusion in order to improve glycemic control. After optimization of the algorithm, the controller will be tested in a clinical trial. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Three-phase Four-leg Inverter LabVIEW FPGA Control Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
In the area of power electronics control, Field Programmable Gate Arrays (FPGAs) have the capability to outperform their Digital Signal Processor (DSP) counterparts due to the FPGA’s ability to implement true parallel processing and therefore facilitate higher switching frequencies, higher control bandwidth, and/or enhanced functionality. National Instruments (NI) has developed two platforms, Compact RIO (cRIO) and Single Board RIO (sbRIO), which combine a real-time processor with an FPGA. The FPGA can be programmed with a subset of the well-known LabVIEW graphical programming language. The use of cRIO and sbRIO for power electronics control has developed over the last few years to include control of three-phase inverters. Most three-phase inverter topologies include three switching legs. The addition of a fourth leg to natively generate the neutral connection allows the inverter to serve single-phase loads in a microgrid or stand-alone power system and to balance the three-phase voltages in the presence of significant load imbalance. However, the control of a four-leg inverter is much more complex. In particular, instead of standard two-dimensional space vector modulation (SVM), the inverter requires three-dimensional space vector modulation (3D-SVM). The candidate software implements complete control algorithms in LabVIEW FPGA for a three-phase four-leg inverter. The software includes feedback control loops, three-dimensional space vector modulation gate-drive algorithms, advanced alarm handling capabilities, contactor control, power measurements, and debugging and tuning tools. The feedback control loops allow inverter operation in AC voltage control, AC current control, or DC bus voltage control modes based on external mode selection by a user or supervisory controller. The software includes the ability to synchronize its AC output to the grid or other voltage source before connection. The software also includes provisions to allow inverter operation in parallel with other voltage regulating devices on the AC or DC buses. This flexibility allows the inverter to operate as a stand-alone voltage source, connected to the grid, or in parallel with other controllable voltage sources as part of a microgrid or remote power system. In addition, as the inverter is expected to operate under severe unbalanced conditions, the software includes algorithms to accurately compute real and reactive power for each phase based on definitions provided in the IEEE Standard 1459: IEEE Standard Definitions for the Measurement of Electric Power Quantities Under Sinusoidal, Nonsinusoidal, Balanced, or Unbalanced Conditions. Finally, the software includes code to output analog signals for debugging and for tuning of control loops. The software fits on the Xilinx Virtex V LX110 FPGA embedded in the NI cRIO-9118 FPGA chassis, and with a 40 MHz base clock, supports a modulation update rate of 40 MHz, user-settable switching frequencies and synchronized control loop update rates of tens of kHz, and a reference waveform generation, including Phase Lock Loop (PLL), update rate of 100 kHz.
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Mugnai, Alberto; Cooper, Harry J.; Tripoli, Gregory J.; Xiang, Xuwu
1992-01-01
The relationship between emerging microwave brightness temperatures (T(B)s) and vertically distributed mixtures of liquid and frozen hydrometeors was investigated, using a cloud-radiation model, in order to establish the framework for a hybrid statistical-physical rainfall retrieval algorithm. Although strong relationships were found between the T(B) values and various rain parameters, these correlations are misleading in that the T(B)s are largely controlled by fluctuations in the ice-particle mixing ratios, which in turn are highly correlated to fluctuations in liquid-particle mixing ratios. However, the empirically based T(B)-rain-rate (T(B)-RR) algorithms can still be used as tools for estimating precipitation if the hydrometeor profiles used for T(B)-RR algorithms are not specified in an ad hoc fashion.
A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit
NASA Technical Reports Server (NTRS)
Calise, A. J.; Flandro, G. A.; Corban, J. E.
1989-01-01
Problems of onboard trajectory optimization and synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum dynamic pressure and maximum aerodynamic heating rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.
Variable screening via quantile partial correlation
Ma, Shujie; Tsai, Chih-Ling
2016-01-01
In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
On adaptive learning rate that guarantees convergence in feedforward networks.
Behera, Laxmidhar; Kumar, Swagat; Patnaik, Awhan
2006-09-01
This paper investigates new learning algorithms (LF I and LF II) based on a Lyapunov function for the training of feedforward neural networks. It is observed that such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, where the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima. This modification also helps in improving the convergence speed in some cases. Conditions for achieving a global minimum with this kind of algorithm have been studied in detail. The performances of the proposed algorithms are compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. It is found that the proposed algorithms (LF I and II) converge much faster than the other two algorithms in attaining the same accuracy. Finally, a comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
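To illustrate the flavor of an adaptive learning rate (though not the paper's LF I/LF II rules), the sketch below trains a tiny 2-3-1 network on XOR with eta = mu * E / ||grad||^2, capped for safety, so that to first order the squared error contracts by roughly the fraction mu at each step.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
mu = 0.1                                     # target fractional error decrease per step

for step in range(5000):
    h = sig(X @ W1 + b1)                     # forward pass
    y = sig(h @ W2 + b2)
    err = y - Y
    E = 0.5 * np.sum(err ** 2)               # sum-squared error (Lyapunov candidate)
    d2 = err * y * (1 - y)                   # backward pass
    d1 = (d2 @ W2.T) * h * (1 - h)
    grads = [X.T @ d1, d1.sum(0), h.T @ d2, d2.sum(0)]
    gnorm2 = sum(np.sum(g ** 2) for g in grads) + 1e-12
    eta = min(mu * E / gnorm2, 1.0)          # adaptive rate, capped on flat regions
    W1 -= eta * grads[0]; b1 -= eta * grads[1]
    W2 -= eta * grads[2]; b2 -= eta * grads[3]
print("final sum-squared error:", E)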
Novel bio-inspired smart control for hazard mitigation of civil structures
NASA Astrophysics Data System (ADS)
Kim, Yeesock; Kim, Changwon; Langari, Reza
2010-11-01
In this paper, a new bio-inspired controller is proposed for vibration mitigation of smart structures subjected to ground disturbances (i.e. earthquakes). The control system is developed through the integration of a brain emotional learning (BEL) algorithm with a proportional-integral-derivative (PID) controller and a semiactive inversion (Inv) algorithm. The BEL algorithm is based on the neurologically inspired computational model of the amygdala and the orbitofrontal cortex. To demonstrate the effectiveness of the proposed hybrid BEL-PID-Inv control algorithm, a seismically excited building structure equipped with a magnetorheological (MR) damper is investigated. The performance of the proposed hybrid BEL-PID-Inv control algorithm is compared with that of passive, PID, linear quadratic Gaussian (LQG), and BEL control systems. In the simulation, the robustness of the hybrid BEL-PID-Inv control algorithm in the presence of modeling uncertainties as well as external disturbances is investigated. It is shown that the proposed hybrid BEL-PID-Inv control algorithm is effective in improving the dynamic responses of seismically excited building structure-MR damper systems.
NASA Astrophysics Data System (ADS)
Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling
2017-09-01
In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to direction in supervised mode. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified in two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted in order to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual tests, the robot can follow the runway centerline outdoors and accurately avoid the obstacle in the room. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
FIVQ algorithm for interference hyper-spectral image compression
NASA Astrophysics Data System (ADS)
Wen, Jia; Ma, Caiwen; Zhao, Junsuo
2014-07-01
Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To obtain better image quality, the IVQ algorithm uses both the mean values and the VQ indices as encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be further improved to achieve a much lower bit rate for the LASIS interference pattern, which has special optical characteristics arising from the pushing and sweeping in the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked to determine whether it has the same mean value as the block currently being processed. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.
Tengku Hashim, Tengku Juhana; Mohamed, Azah
2017-01-01
The growing interest in distributed generation (DG) in recent years has led to a number of generators connected to a distribution system. The integration of DGs in a distribution system has resulted in a network known as an active distribution network, due to the existence of bidirectional power flow in the system. The voltage rise issue is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), which is a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in a distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) so as to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and percentage of active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO optimisation techniques have been tested on a radial 13-bus distribution system, and the results show that the BSA performs better than PSO by providing a better fitness value and convergence rate. PMID:28991919
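A compact, generic sketch of the backtracking search algorithm (BSA) loop applied to a stand-in objective. The historical-population update, mutation scale, and simplified crossover follow the commonly published form of BSA, and the losses/voltage-deviation objective and decision variables below are placeholders, since the network model is not given in the abstract.

```python
import numpy as np

def bsa_minimize(objective, bounds, pop_size=30, iterations=100, mix_rate=1.0, seed=0):
    """Generic backtracking search algorithm (simplified crossover), not the paper's exact variant."""
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(low)
    P = rng.uniform(low, high, (pop_size, dim))        # current population
    oldP = rng.uniform(low, high, (pop_size, dim))     # historical population
    fitness = np.array([objective(x) for x in P])

    for _ in range(iterations):
        # Selection-I: occasionally refresh the historical population, then shuffle it
        if rng.random() < rng.random():
            oldP = P.copy()
        oldP = oldP[rng.permutation(pop_size)]

        # Mutation and (simplified) crossover
        F = 3.0 * rng.standard_normal()
        mutant = P + F * (oldP - P)
        cross = rng.random((pop_size, dim)) < mix_rate * rng.random()
        trial = np.clip(np.where(cross, mutant, P), low, high)

        # Selection-II: greedy replacement
        trial_fit = np.array([objective(x) for x in trial])
        improved = trial_fit < fitness
        P[improved], fitness[improved] = trial[improved], trial_fit[improved]

    best = np.argmin(fitness)
    return P[best], fitness[best]

# Placeholder decision variables (e.g. DG power factor, tap position, curtailment fraction)
# and a placeholder objective standing in for total losses plus voltage deviation
bounds = np.array([[0.85, 1.0], [-10.0, 10.0], [0.0, 0.5]])
best_x, best_f = bsa_minimize(lambda x: np.sum((x - bounds.mean(axis=1)) ** 2), bounds)
```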
NASA Astrophysics Data System (ADS)
Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing
2015-08-01
Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from the wavefront slopes through a pre-measured relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with good real-time performance and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the actuator voltages are obtained through iterative computation, gaining a great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2)-O(n^3) for the direct gradient wavefront control algorithm, while it is about O(n)-O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
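A small numerical sketch of the contrast the abstract draws: the direct gradient approach multiplies the measured slopes by a pre-computed reconstructor matrix, whereas an iterative scheme (here a plain conjugate-gradient solve of the normal equations) avoids storing and applying that dense reconstructor. The interaction matrix below is random stand-in data, not a real Hartmann/DM calibration, and the solver is a generic CG rather than the paper's specific iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_slopes, n_act = 800, 400                  # stand-in sizes, not a real AO system
D = rng.standard_normal((n_slopes, n_act))  # interaction (influence) matrix: slopes = D @ voltages
s = rng.standard_normal(n_slopes)           # measured wavefront slopes

# Direct gradient approach: pre-computed least-squares reconstructor, then one dense multiply
R = np.linalg.pinv(D)
v_direct = R @ s

# Iterative approach: solve (D^T D) v = D^T s with conjugate gradients, no dense reconstructor stored
def conjugate_gradient(matvec, b, iters=200, tol=1e-8):
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

v_iterative = conjugate_gradient(lambda v: D.T @ (D @ v), D.T @ s)
print(np.allclose(v_direct, v_iterative, atol=1e-3))   # both give the least-squares voltages
```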
Active flutter suppression using optimal output feedback digital controllers
NASA Technical Reports Server (NTRS)
1982-01-01
A method for synthesizing digital active flutter suppression controllers using the concept of optimal output feedback is presented. A convergent algorithm is employed to determine constrained control law parameters that minimize an infinite time discrete quadratic performance index. Low order compensator dynamics are included in the control law and the compensator parameters are computed along with the output feedback gain as part of the optimization process. An input noise adjustment procedure is used to improve the stability margins of the digital active flutter controller. Sample rate variation, prefilter pole variation, control structure variation and gain scheduling are discussed. A digital control law which accommodates computation delay can stabilize the wing with reasonable rms performance and adequate stability margins.
Optimal Dynamic Strategies for Index Tracking and Algorithmic Trading
NASA Astrophysics Data System (ADS)
Ward, Brian
In this thesis we study dynamic strategies for index tracking and algorithmic trading. Tracking problems have become ever more important in Financial Engineering as investors seek to precisely control their portfolio risks and exposures over different time horizons. This thesis analyzes various tracking problems and elucidates the tracking errors and strategies one can employ to minimize those errors and maximize profit. In Chapters 2 and 3, we study the empirical tracking properties of exchange traded funds (ETFs), leveraged ETFs (LETFs), and futures products related to spot gold and the Chicago Board Options Exchange (CBOE) Volatility Index (VIX), respectively. These two markets provide interesting and differing examples for understanding index tracking. We find that static strategies work well in the nonleveraged case for gold, but fail to track well in the corresponding leveraged case. For VIX, tracking via neither ETFs nor futures portfolios succeeds, even in the nonleveraged case. This motivates the need for dynamic strategies, some of which we construct in these two chapters and further expand on in Chapter 4. There, we analyze a framework for index tracking and risk exposure control through financial derivatives. We derive a tracking condition that restricts our exposure choices and also define a slippage process that characterizes the deviations from the index over longer horizons. The framework is applied to a number of models, for example, the Black-Scholes and Heston models for equity index tracking, as well as the Square Root (SQR) model and the Concatenated Square Root (CSQR) model for VIX tracking. By specifying how each of these models falls into our framework, we are able to understand the tracking errors in each of these models. Finally, Chapter 5 analyzes a tracking problem of a different kind that arises in algorithmic trading: schedule following for optimal execution. We formulate and solve a stochastic control problem to obtain the optimal trading rates using both market and limit orders. There is a quadratic terminal penalty to ensure complete liquidation as well as a trade speed limiter and trade director to provide better control on the trading rates. The latter two penalties allow the trader to tailor the magnitude and sign (respectively) of the optimal trading rates. We demonstrate the applicability of the model to following a benchmark schedule. In addition, we identify conditions on the model parameters to ensure optimality of the controls and finiteness of the associated value functions. Throughout the chapter, numerical simulations are provided to demonstrate the properties of the optimal trading rates.
NASA Astrophysics Data System (ADS)
Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response would require separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters using imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of the reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for survival fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
NASA Astrophysics Data System (ADS)
Apribowo, Chico Hermanu Brillianto; Ibrahim, Muhammad Hamka; Wicaksono, F. X. Rian
2018-02-01
The growing burden of the load and the complexity of the power system have had an impact on the need for optimization of power system operation. Optimal power flow (OPF) with optimal placement and rating of a thyristor-controlled series capacitor (TCSC) is an effective solution used to determine the economic cost of operating the plant and to regulate the power flow in the power system. The purpose of this study is to minimize the total cost of generation by determining the location and the optimal rating of the TCSC using genetic algorithm-design of experiment techniques (GA-DOE). In simulations on the 500 kV Java-Bali system with five TCSC compensators, the proposed method reduces the generation cost by 0.89% compared to OPF without TCSC.
Development of an Optimal Controller and Validation Test Stand for Fuel Efficient Engine Operation
NASA Astrophysics Data System (ADS)
Rehn, Jack G., III
There are numerous motivations for improvements in automotive fuel efficiency. As concerns over the environment grow at a rate unmatched by hybrid and electric automotive technologies, the need for reductions in fuel consumed by current road vehicles has never been more present. Studies have shown that a major cause of poor fuel consumption in automobiles is improper driving behavior, which cannot be mitigated by purely technological means. The emergence of autonomous driving technologies has provided an opportunity to alleviate this inefficiency by removing the necessity of a driver. Before autonomous technology can be relied upon to reduce gasoline consumption on a large scale, robust programming strategies must be designed and tested. The goal of this thesis work was to design and deploy an autonomous control algorithm to navigate a four-cylinder, gasoline combustion engine through a series of changing load profiles in a manner that prioritizes fuel efficiency. The experimental setup is analogous to a passenger vehicle driving over hilly terrain at highway speeds. The proposed approach accomplishes this using a model-predictive, real-time optimization algorithm that was calibrated to the engine. Performance of the optimal control algorithm was tested on the engine against contemporary cruise control. Results indicate that the "efficient" strategy achieved one to two percent reductions in total fuel consumed for all load profiles tested. The consumption data gathered also suggests that further improvements could be realized on a different subject engine and using extended models and a slightly modified optimal control approach.
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
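A toy tabular sketch of the "local" idea: value and policy updates are applied only on a chosen subset of the (discretised) state space in each iteration, rather than on every state. The one-dimensional dynamics, cost, and random subset-selection rule are illustrative assumptions; the paper treats general discrete-time nonlinear systems with admissibility analysis and termination criteria that are not reproduced here.

```python
import numpy as np

# Discretised 1-D state and action grids (illustrative)
states = np.linspace(-1.0, 1.0, 41)
actions = np.linspace(-0.5, 0.5, 11)

def step(x, u):
    return np.clip(0.9 * x + u, -1.0, 1.0)   # toy dynamics

def cost(x, u):
    return x ** 2 + u ** 2                    # toy utility function

V = np.zeros_like(states)                     # iterative value function
policy = np.zeros_like(states)                # iterative control law
rng = np.random.default_rng(0)

for it in range(200):
    # "Local" update: only a subset of the state space is refreshed in this iteration
    subset = rng.choice(len(states), size=10, replace=False)
    for i in subset:
        x = states[i]
        q = [cost(x, u) + V[np.argmin(np.abs(states - step(x, u)))] for u in actions]
        V[i] = min(q)
        policy[i] = actions[int(np.argmin(q))]
```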
Design and implementation of co-operative control strategy for hybrid AC/DC microgrids
NASA Astrophysics Data System (ADS)
Mahmud, Rasel
This thesis is mainly divided into two major sections: 1) modeling and control of AC, DC, and hybrid AC/DC microgrids using distributed co-operative control, and 2) development of a four-bus laboratory prototype of an AC microgrid system. At first, a distributed cooperative control (DCC) for a DC microgrid considering the state-of-charge (SoC) of the batteries in a typical plug-in electric vehicle (PEV) is developed. In DC microgrids, this methodology is developed to assist the load sharing amongst the distributed generation units (DGs), according to their ratings, with improved voltage regulation. Subsequently, a DCC-based control algorithm for AC microgrids is also investigated to improve the performance of the AC microgrid in terms of power sharing among the DGs, voltage regulation and frequency deviation. The results validate the advantages of the proposed methodology as compared to traditional droop control of AC microgrids. The DCC-based control methodologies for AC and DC microgrids are further expanded to develop a DCC-based power management algorithm for a hybrid AC/DC microgrid. The developed algorithm for the hybrid microgrid controls the power flow through the interfacing converter (IC) between the AC and DC microgrids. This facilitates power sharing between the DGs according to their power ratings. Moreover, it enables fixed scheduled power delivery at different operating conditions, while maintaining good voltage regulation and an improved frequency profile. The second section provides a detailed explanation and step-by-step design and development of an AC/DC microgrid testbed. Controllers for the three-phase inverters are designed and tested on different generation units along with their corresponding inductor-capacitor-inductor (LCL) filters to eliminate the switching frequency harmonics. Electric power distribution line models are developed to form the microgrid network topology. Voltage and current sensors are placed in the proper positions to achieve full visibility over the microgrid. A running average filter (RAF) based enhanced phase-locked loop (EPLL) is designed and implemented to extract frequency and phase angle information. A PLL-based synchronizing scheme is also developed to synchronize the DGs to the microgrid. The developed laboratory prototype runs on a dSpace platform for real-time data acquisition, communication and controller implementation.
Martsolf, Grant R; Barrett, Marguerite L; Weiss, Audrey J; Kandrack, Ryan; Washington, Raynard; Steiner, Claudia A; Mehrotra, Ateev; SooHoo, Nelson F; Coffey, Rosanna
2016-08-17
Readmission rates following total hip arthroplasty (THA) and total knee arthroplasty (TKA) are increasingly used to measure hospital performance. Readmission rates that are not adjusted for race/ethnicity and socioeconomic status, patient risk factors beyond a hospital's control, may not accurately reflect a hospital's performance. In this study, we examined the extent to which risk-adjusting for race/ethnicity and socioeconomic status affected hospital performance in terms of readmission rates following THA and TKA. We calculated 2 sets of risk-adjusted readmission rates by (1) using the Centers for Medicare & Medicaid Services standard risk-adjustment algorithm that incorporates patient age, sex, comorbidities, and hospital effects and (2) adding race/ethnicity and socioeconomic status to the model. Using data from the Healthcare Cost and Utilization Project, 2011 State Inpatient Databases, we compared the relative performances of 1,194 hospitals across the 2 methods. Addition of race/ethnicity and socioeconomic status to the risk-adjustment algorithm resulted in (1) little or no change in the risk-adjusted readmission rates at nearly all hospitals; (2) no change in the designation of the readmission rate as better, worse, or not different from the population mean at >99% of the hospitals; and (3) no change in the excess readmission ratio at >97% of the hospitals. Inclusion of race/ethnicity and socioeconomic status in the risk-adjustment algorithm led to a relative-performance change in readmission rates following THA and TKA at <3% of the hospitals. We believe that policymakers and payers should consider this result when deciding whether to include race/ethnicity and socioeconomic status in risk-adjusted THA and TKA readmission rates used for hospital accountability, payment, and public reporting. Prognostic Level III. See instructions for Authors for a complete description of levels of evidence. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.
Waewsak, Chaiwat; Nopharatana, Annop; Chaiprasert, Pawinee
2010-01-01
In the developed neural-fuzzy control system for an anaerobic hybrid reactor (AHR) used in wastewater treatment and biogas production, a neural network with a backpropagation algorithm predicted the variables pH, alkalinity (Alk) and total volatile acids (TVA) at the present day (time t). These predictions were used as input to the fuzzy logic to calculate the influent feed flow rate, which was applied to control and monitor the process response during different operations in the initial, overload influent feeding, and recovery phases. In all three phases, this neural-fuzzy control system showed great potential to control the AHR with high stability, good performance, and quick response. In the overloading operation phase (phase II), with a twofold calculated influent flow rate together with a twofold organic loading rate (OLR), the control system responded rapidly and was sensitive to the intended overload. When the influent feeding rate followed the control system's calculations in the initial operation phase (phase I) and the recovery operation phase (phase III), the neural-fuzzy control system was capable of controlling the AHR well, with pH close to 7, TVA/Alk < 0.4, COD removal > 80%, and biogas and methane yields of 0.45 and 0.30 m3/kg COD removed.
Automatic control algorithm effects on energy production
NASA Technical Reports Server (NTRS)
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
Adaptive Control Strategies for Flexible Robotic Arm
NASA Technical Reports Server (NTRS)
Bialasiewicz, Jan T.
1996-01-01
The control problem of a flexible robotic arm has been investigated. The control strategies that have been developed have a wide application in approaching the general control problem of flexible space structures. The following control strategies have been developed and evaluated: neural self-tuning control algorithm, neural-network-based fuzzy logic control algorithm, and adaptive pole assignment algorithm. All of the above algorithms have been tested through computer simulation. In addition, the hardware implementation of a computer control system that controls the tip position of a flexible arm clamped on a rigid hub mounted directly on the vertical shaft of a dc motor, has been developed. An adaptive pole assignment algorithm has been applied to suppress vibrations of the described physical model of flexible robotic arm and has been successfully tested using this testbed.
NASA Technical Reports Server (NTRS)
Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola
2005-01-01
Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.
Fully decentralized estimation and control for a modular wheeled mobile robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mutambara, A.G.O.; Durrant-Whyte, H.F.
2000-06-01
In this paper, the problem of fully decentralized data fusion and control for a modular wheeled mobile robot (WMR) is addressed. This is a vehicle system with nonlinear kinematics, distributed multiple sensors, and nonlinear sensor models. The problem is solved by applying fully decentralized estimation and control algorithms based on the extended information filter. This is achieved by deriving a modular, decentralized kinematic model by using plane motion kinematics to obtain the forward and inverse kinematics for a generalized simple wheeled vehicle. This model is then used in the decentralized estimation and control algorithms. WMR estimation and control is thus obtained locally using reduced-order models with reduced communication. When communication of information between nodes is carried out after every measurement (full rate communication), the estimates and control signals obtained at each node are equivalent to those obtained by a corresponding centralized system. Transputer architecture is used as the basis for hardware and software design as it supports the extensive communication and concurrency requirements that characterize modular and decentralized systems. The advantages of a modular WMR vehicle include scalability, application flexibility, low prototyping costs, and high reliability.
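A minimal numpy sketch of the information-form update that makes fully decentralized fusion convenient: each node adds its local information contributions, and contributions arriving from other nodes are summed the same way. The linearised measurement models and noise values below are placeholders, not the WMR sensor models.

```python
import numpy as np

def information_update(Y, y, H, R, z):
    """Add one (possibly remote) observation to the information matrix Y and vector y."""
    Rinv = np.linalg.inv(R)
    return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z

# Two-state example (e.g. position and heading), two nodes each observing one state
Y = np.eye(2) * 1e-2                      # weak prior information
y = np.zeros(2)
H1, R1, z1 = np.array([[1.0, 0.0]]), np.array([[0.04]]), np.array([1.2])
H2, R2, z2 = np.array([[0.0, 1.0]]), np.array([[0.01]]), np.array([0.3])

for H, R, z in [(H1, R1, z1), (H2, R2, z2)]:   # contributions are simply summed
    Y, y = information_update(Y, y, H, R, z)

state_estimate = np.linalg.solve(Y, y)          # recover the state from information form
```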
Transportation using spinning tethers with emphasis on phasing and plane change
NASA Technical Reports Server (NTRS)
Henderson, David G.
1989-01-01
This paper studies the potential uses of spinning tethers as components in a transportation system. Additional degrees of freedom in the selection of transfer orbits as well as phasing control are introduced by allowing both the spin rate of the tethers to be controllable and by allowing the ejection and capture points to be anywhere along the tether length. Equations are derived for the phasing of the planar transfer problem. A construction algorithm for nonplanar transfers is developed and nonplanar phasing conditions are examined.
Hou, Runmin; Wang, Li; Gao, Qiang; Hou, Yuanglong; Wang, Chao
2017-09-01
This paper proposes a novel indirect adaptive fuzzy wavelet neural network (IAFWNN) to control the nonlinearity, wide load variations, time variation and uncertain disturbances of an ac servo system. In the proposed approach, the self-recurrent wavelet neural network (SRWNN) is employed to construct an adaptive self-recurrent consequent part for each fuzzy rule of the TSK fuzzy model. For the IAFWNN controller, the online learning algorithm is based on the back propagation (BP) algorithm. Moreover, an improved particle swarm optimization (IPSO) is used to adapt the learning rate. The aid of an adaptive SRWNN identifier offers real-time gradient information to the adaptive fuzzy wavelet neural controller to effectively overcome the impact of parameter variations, load disturbances and other uncertainties, and provides good dynamic performance. The asymptotic stability of the system is guaranteed by using the Lyapunov method. The results of the simulation and the prototype test prove that the proposed methods are effective and suitable. Copyright © 2017. Published by Elsevier Ltd.
A novel teaching system for industrial robots.
Lin, Hsien-I; Lin, Yu-Hsiang
2014-03-27
The most important tool for controlling an industrial robotic arm is a teach pendant, which controls the robotic arm movement in work spaces and accomplishes teaching tasks. A good teaching tool should be easy to operate and able to complete teaching tasks rapidly and effortlessly. In this study, a new teaching system is proposed for enabling users to operate robotic arms and accomplish teaching tasks easily. The proposed teaching system consists of the teach pen, optical markers on the pen, a motion capture system, and the pen tip estimation algorithm. With the marker positions captured by the motion capture system, the pose of the teach pen is accurately calculated by the pen tip algorithm and used to control the robot tool frame. In addition, Fitts' Law is adopted to verify the usefulness of this new system, and the results show that the system provides high accuracy, excellent operation performance, and a stable error rate. The system also maintains superior performance even when users work on platforms with different inclination angles.
NASA Astrophysics Data System (ADS)
Wen, Xianfei; Enqvist, Andreas
2017-09-01
Cs2LiYCl6:Ce3+ (CLYC) detectors have demonstrated the capability to simultaneously detect γ-rays and thermal and fast neutrons with medium energy resolution, reasonable detection efficiency, and substantially high pulse shape discrimination performance. A disadvantage of CLYC detectors is their long scintillation decay times, which cause pulse pile-up at moderate input count rates. Pulse processing algorithms were developed based on triangular and trapezoidal filters to discriminate between neutrons and γ-rays at high count rates. The algorithms were first tested using low-rate data, where they exhibit pulse-shape discrimination performance comparable to that of the charge comparison method. They were then evaluated at high count rates. Neutrons and γ-rays were adequately identified with high throughput at rates of up to 375 kcps. The algorithm developed using the triangular filter exhibits discrimination capability marginally higher than that of the trapezoidal-filter-based algorithm at both low and high rates. The algorithms exhibit low computational complexity and are executable on an FPGA in real time. They are also suitable for application to other radiation detectors whose pulses pile up at high rates owing to long scintillation decay times.
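A rough sketch of two ingredients the abstract mentions, applied to a synthetic pulse: a trapezoidal shaping filter built from two moving sums (no pole-zero correction), and a simple charge-comparison (tail-to-total) ratio for pulse-shape discrimination. The filter lengths, decay constants, gate lengths, and threshold are invented numbers, not the paper's parameters.

```python
import numpy as np

def moving_sum(x, length):
    c = np.cumsum(np.concatenate(([0.0], x)))
    out = np.zeros_like(x)
    out[length - 1:] = c[length:] - c[:-length]
    return out

def trapezoidal_filter(x, rise=20, flat=10):
    """Difference of two delayed moving sums (sketch only, no pole-zero correction)."""
    s = moving_sum(x, rise)
    delayed = np.zeros_like(s)
    delayed[rise + flat:] = s[:-(rise + flat)]
    return s - delayed

def charge_comparison(pulse, peak_idx, short=30, long_=200):
    """Tail-to-total ratio used for neutron/gamma separation (assumed gate lengths)."""
    total = pulse[peak_idx:peak_idx + long_].sum()
    tail = pulse[peak_idx + short:peak_idx + long_].sum()
    return tail / total

# Synthetic CLYC-like pulse: fast plus slow decay components
t = np.arange(1000)
pulse = 0.6 * np.exp(-t / 50.0) + 0.4 * np.exp(-t / 600.0)
shaped = trapezoidal_filter(pulse)
amplitude = shaped.max()                      # energy estimate from the shaped pulse
psd = charge_comparison(pulse, peak_idx=0)
is_neutron_like = psd > 0.55                  # threshold is purely illustrative
```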
A constrained joint source/channel coder design and vector quantization of nonstationary sources
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.
1993-01-01
The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
Adaptive intercolor error prediction coder for lossless color (RGB) picture compression
NASA Astrophysics Data System (ADS)
Mann, Y.; Peretz, Y.; Mitchell, Harvey B.
2001-09-01
Most current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes of an input color picture. To improve the compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in a reduction of 8% in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.
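A simplified sketch of the inter-color error prediction idea: each plane is first predicted spatially (here with the simple "west" neighbour rather than the JPEG-LS median predictor), and for the G and B planes the co-located residual of the already-coded reference plane is subtracted again, so only the inter-plane prediction error would be entropy coded. The choice of reference plane and the spatial predictor are assumptions for illustration.

```python
import numpy as np

def spatial_residual(plane):
    """Residual against the left (west) neighbour; JPEG-LS itself uses a richer median predictor."""
    pred = np.zeros_like(plane)
    pred[:, 1:] = plane[:, :-1]
    return plane.astype(np.int32) - pred.astype(np.int32)

def iep_residuals(rgb):
    """Inter-color error prediction: code G and B residuals relative to the R residual."""
    r_res = spatial_residual(rgb[..., 0])
    g_res = spatial_residual(rgb[..., 1]) - r_res
    b_res = spatial_residual(rgb[..., 2]) - r_res
    return r_res, g_res, b_res

# On natural images the planes are correlated, so g_res/b_res have lower variance (fewer bits);
# the random data here is only to show the plumbing.
rgb = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
r_res, g_res, b_res = iep_residuals(rgb)
print([float(np.var(x)) for x in (r_res, g_res, b_res)])
```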
Model reference adaptive control of robots
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo
1991-01-01
This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible joint arm and a Unimation PUMA 560 arm; and these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT based MRAC algorithms had several problems. The original algorithm that was developed guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants. This condition is very restrictive, since most systems do not satisfy this assumption. Further developments to the algorithm led to an expansion of the number of plants that could be controlled, however, a steady state error was introduced in the response. These problems led to the introduction of some modifications to the algorithms so that they would be able to control a wider class of plants and at the same time would asymptotically track the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned before. The results of the simulations are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.
Cost-effective analysis of different algorithms for the diagnosis of hepatitis C virus infection.
Barreto, A M E C; Takei, K; Sabino, E C; Bellesa, M A O; Salles, N A; Barreto, C C; Nishiya, A S; Chamone, D F
2008-02-01
We compared the cost-benefit of two algorithms recently proposed by the Centers for Disease Control and Prevention, USA, with that of the conventional one, to determine the most appropriate algorithm for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using an s/co ratio that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all ELISA-positive or -inconclusive samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant with one another. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, which were 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
Design of a multi-arm randomized clinical trial with no control arm.
Magaret, Amalia; Angus, Derek C; Adhikari, Neill K J; Banura, Patrick; Kissoon, Niranjan; Lawler, James V; Jacob, Shevin T
2016-01-01
Clinical trial designs that include multiple treatments are currently limited to those that perform pairwise comparisons of each investigational treatment to a single control. However, there are settings, such as the recent Ebola outbreak, in which no treatment has been demonstrated to be effective; and therefore, no standard of care exists which would serve as an appropriate control. For illustrative purposes, we focused on the care of critically ill patients presenting in austere settings with 'sepsis-like' syndromes. Our approach involves a novel algorithm for comparing mortality among arms without requiring a single fixed control. The algorithm allows poorly-performing arms to be dropped during interim analyses. Consequently, the study may be completed earlier than planned. We used simulation to determine operating characteristics for the trial and to estimate the required sample size. We present a potential study design targeting a minimal effect size of a 23% relative reduction in mortality between any pair of arms. Using estimated power and spurious significance rates from the simulated scenarios, we show that such a trial would require 2550 participants. Over a range of scenarios, our study has 80 to 99% power to select the optimal treatment. Using a fixed control design, if the control arm is least efficacious, 640 subjects would be enrolled into the least efficacious arm, while our algorithm would enroll between 170 and 430. This simulation method can be easily extended to other settings or other binary outcomes. Early dropping of arms is efficient and ethical when conducting clinical trials with multiple arms. Copyright © 2015 Elsevier Inc. All rights reserved.
Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.
Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich
2017-08-01
Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms: Wild-ID, I3S Pattern+, APHIS, and AmphIdent, using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated the performance in relation to database size and the number of matching images in the database. We found that algorithm performance differed greatly by algorithm and image database, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.
The combined control algorithm for large-angle maneuver of HITSAT-1 small satellite
NASA Astrophysics Data System (ADS)
Zhaowei, Sun; Yunhai, Geng; Guodong, Xu; Ping, He
2004-04-01
The HITSAT-1 is the first small satellite developed by Harbin Institute of Technology (HIT), and its mission objective is to test several pivotal techniques. Large-angle maneuver control is one of the pivotal techniques of HITSAT-1, and the instantaneous Eulerian axis control algorithm (IEACA) has been applied. Because reaction wheels and a magnetorquer are used as the control actuators, a combined control algorithm has been adopted during the large-angle maneuver. Computer simulation based on the MATRIXx 6.0 software has been completed, and the results indicate that the combined control algorithm obviously reduces the reaction wheel speeds and that the IEACA algorithm has the advantages of simplicity and efficiency.
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
Assessment of various supervised learning algorithms using different performance metrics
NASA Astrophysics Data System (ADS)
Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.
2017-11-01
Our work presents a comparison of the performance of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms considered in this work are Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbour (KNN), Naïve Bayes (NB) and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as accuracy, F-measure, G-measure, precision, misclassification rate, false positive rate, true positive rate, specificity and prevalence.
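A short helper showing how the listed metrics follow from the binary confusion-matrix counts; the G-measure is taken here as the geometric mean of precision and recall, which is one common definition and an assumption on our part.

```python
def binary_metrics(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # true positive rate / sensitivity
    return {
        "accuracy": (tp + tn) / total,
        "misclassification_rate": (fp + fn) / total,
        "precision": precision,
        "true_positive_rate": recall,
        "false_positive_rate": fp / (fp + tn),
        "specificity": tn / (tn + fp),
        "prevalence": (tp + fn) / total,
        "f_measure": 2 * precision * recall / (precision + recall),
        "g_measure": (precision * recall) ** 0.5,   # assumed definition
    }

print(binary_metrics(tp=40, fp=10, tn=45, fn=5))
```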
NASA Astrophysics Data System (ADS)
Joa, Eunhyek; Park, Kwanwoo; Koh, Youngil; Yi, Kyongsu; Kim, Kilsoo
2018-04-01
This paper presents a tyre slip-based integrated chassis control of front/rear traction distribution and four-wheel braking for enhanced performance from moderate driving to limit handling. The proposed algorithm adopts a hierarchical structure: supervisor, desired motion tracking controller, and optimisation-based control allocation. In the supervisor, the desired vehicle motion is calculated by considering transient cornering characteristics. In the desired motion tracking controller, a virtual control input is determined in the manner of sliding mode control in order to track the desired vehicle motion. In the control allocation, the virtual control input is allocated so as to minimise a cost function. The cost function consists of two major parts. The first part is a slip-based quantification of tyre friction utilisation, which does not need a tyre force estimation. The second part is an allocation guideline, which guides the optimally allocated inputs towards a predefined solution. The proposed algorithm has been investigated via simulation from moderate driving to limit handling scenarios. Compared to the Base and direct yaw moment control systems, the proposed algorithm can effectively reduce tyre dissipation energy in moderate driving situations. Moreover, the proposed algorithm enhances limit handling performance compared to the Base and direct yaw moment control systems. In addition, the proposed algorithm has been compared with a control algorithm based on known tyre force information. The results show that the performance of the proposed algorithm is similar to that of the control algorithm with known tyre force information.
Discrete-State Simulated Annealing For Traveling-Wave Tube Slow-Wave Circuit Optimization
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Bulson, Brian A.; Kory, Carol L.; Williams, W. Dan (Technical Monitor)
2001-01-01
Algorithms based on the global optimization technique of simulated annealing (SA) have proven useful in designing traveling-wave tube (TWT) slow-wave circuits for high RF power efficiency. The characteristic of SA that enables it to determine a globally optimized solution is its ability to accept non-improving moves in a controlled manner. In the initial stages of the optimization, the algorithm moves freely through configuration space, accepting most of the proposed designs. This freedom of movement allows non-intuitive designs to be explored rather than restricting the optimization to local improvement upon the initial configuration. As the optimization proceeds, the rate of acceptance of non-improving moves is gradually reduced until the algorithm converges to the optimized solution. The rate at which the freedom of movement is decreased is known as the annealing or cooling schedule of the SA algorithm. The main disadvantage of SA is that there is not a rigorous theoretical foundation for determining the parameters of the cooling schedule. The choice of these parameters is highly problem dependent and the designer needs to experiment in order to determine values that will provide a good optimization in a reasonable amount of computational time. This experimentation can absorb a large amount of time, especially when the algorithm is being applied to a new type of design. In order to eliminate this disadvantage, a variation of SA known as discrete-state simulated annealing (DSSA) was recently developed. DSSA provides the theoretical foundation for a generic cooling schedule which is problem independent. Results of similar quality to SA can be obtained, but without the extra computational time required to tune the cooling parameters. Two algorithm variations based on DSSA were developed and programmed into a Microsoft Excel spreadsheet graphical user interface (GUI) to the two-dimensional nonlinear multisignal helix traveling-wave amplifier analysis program TWA3. The algorithms were used to optimize the computed RF efficiency of a TWT by determining the phase velocity profile of the slow-wave circuit. The mathematical theory and computational details of the DSSA algorithms will be presented and results will be compared to those obtained with a SA algorithm.
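A bare-bones simulated annealing loop showing the controlled acceptance of non-improving moves and a hand-tuned geometric cooling schedule, i.e. exactly the part that DSSA replaces with a problem-independent schedule (not reproduced here). The objective and move generator are placeholders standing in for the phase-velocity-profile optimisation, not the TWA3 model.

```python
import math
import random

def simulated_annealing(objective, x0, neighbour, t0=1.0, alpha=0.95, steps_per_temp=50, t_min=1e-3):
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            cand = neighbour(x)
            fc = objective(cand)
            # Always accept improving moves; accept worsening moves with Boltzmann probability
            if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha            # geometric cooling; this is the schedule DSSA makes generic
    return best, fbest

# Placeholder objective standing in for (negative) computed RF efficiency over a velocity profile
x0 = [0.5] * 8
best, fbest = simulated_annealing(
    objective=lambda v: sum((vi - 0.3) ** 2 for vi in v),
    x0=x0,
    neighbour=lambda v: [vi + random.uniform(-0.05, 0.05) for vi in v],
)
```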
Automated Speech Rate Measurement in Dysarthria
ERIC Educational Resources Information Center
Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc
2015-01-01
Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…
An improved VSS NLMS algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan
2017-08-01
In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms. However, there is a contradiction between convergence speed and steady-state error that affects the performance of the NLMS algorithm. We therefore propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily set parameters, and it effectively resolves the contradiction in NLMS. The simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate and low steady-state error.
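A small sketch of an NLMS adaptive filter in which the step size shrinks as the error power decays; the abstract does not spell out the paper's specific step-size rule, so the smoothed-error rule below is an illustrative stand-in rather than the proposed formula.

```python
import numpy as np

def vss_nlms(x, d, order=16, mu_max=1.0, mu_min=0.05, eps=1e-6):
    """NLMS with a variable step size driven by a smoothed error power (illustrative rule)."""
    w = np.zeros(order)
    e_hist = np.zeros(len(x))
    err_power = 1.0
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]            # regressor (most recent sample first)
        e = d[n] - w @ u
        err_power = 0.95 * err_power + 0.05 * e * e # smoothed error power
        mu = mu_min + (mu_max - mu_min) * err_power / (1.0 + err_power)
        w += mu * e * u / (eps + u @ u)             # normalised update
        e_hist[n] = e
    return w, e_hist

# Identify an unknown FIR path from noisy observations (toy noise-cancellation setup)
rng = np.random.default_rng(0)
h = rng.standard_normal(16) * 0.3
x = rng.standard_normal(5000)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = vss_nlms(x, d)
```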
Making Advanced Scientific Algorithms and Big Scientific Data Management More Accessible
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkatakrishnan, S. V.; Mohan, K. Aditya; Beattie, Keith
2016-02-14
Synchrotrons such as the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory are known as user facilities. They are sources of extremely bright X-ray beams, and scientists come from all over the world to perform experiments that require these beams. As the complexity of experiments has increased, and the size and rates of data sets have exploded, managing, analyzing and presenting the data collected at synchrotrons has been an increasing challenge. The ALS has partnered with high performance computing, fast networking, and applied mathematics groups to create a "super-facility", giving users simultaneous access to the experimental, computational, and algorithmic resources to overcome this challenge. This combination forms an efficient closed loop, where data, despite its high rate and volume, is transferred and processed, in many cases immediately and automatically, on appropriate compute resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beam-time. In this paper, we present work done on advanced tomographic reconstruction algorithms to support users of the 3D micron-scale imaging instrument (Beamline 8.3.2, hard X-ray micro-tomography).
A high data rate universal lattice decoder on FPGA
NASA Astrophysics Data System (ADS)
Ma, Jing; Huang, Xinming; Kura, Swapna
2005-06-01
This paper presents the architecture design of a high data rate universal lattice decoder for MIMO channels on an FPGA platform. A Pohst-strategy-based lattice decoding algorithm is modified in this paper to reduce the complexity of the closest lattice point search. The data dependency of the improved algorithm is examined, and a parallel, pipelined architecture is developed with the iterative decoding function on the FPGA and the division-intensive channel matrix preprocessing on a DSP. Simulation results demonstrate that the improved lattice decoding algorithm provides a better bit error rate and fewer iterations compared with the original algorithm. The system prototype of the decoder shows that it supports data rates up to 7 Mbit/s on a Virtex2-1000 FPGA, which is about 8 times faster than the original algorithm on the FPGA platform and two orders of magnitude better than its implementation on a DSP platform.
Improving HVAC operational efficiency in small-and medium-size commercial buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert
Small- and medium-size (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability, no monitoring, and no failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically use packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of the existing commercial building stock in the United States for many reasons, chief among them being to mitigate climate change impacts. Studies have shown that managing set points and schedules of the RTUs will result in up to 20% energy and cost savings. Another problem associated with RTUs is short cycling, when an RTU goes through ON and OFF cycles too frequently. Excessive cycling can lead to excessive wear and to premature failure of the compressor or its components. Also, short cycling can result in a significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby leading to persistent building operations, can significantly increase the operational efficiency of SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. The work reported in this paper describes two algorithms, for detecting the zone set point temperature and the RTU cycling rate, that can be deployed on this low-cost infrastructure. These algorithms only require zone temperature data for detection. The algorithms have been tested and validated using field data from a number of RTUs from six buildings in different climate locations. Overall, the algorithms were successful in detecting the set points and ON/OFF cycles accurately using the peak detection technique. The paper describes the two algorithms and the results from testing them using field data, explains how the algorithms can be used to improve SMB efficiency, and presents related conclusions.
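A sketch of how zone-temperature peaks and valleys alone can expose the cycling rate and the set-point band, in the spirit of the peak-detection approach described; the synthetic temperature trace, the prominence and distance values, and the set-point estimate are all assumptions rather than the paper's algorithm details.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic one-day zone temperature at 1-minute resolution: cooling cycles around a 74 F set point
t = np.arange(24 * 60)
zone_temp = 74.0 + 0.5 * np.sin(2 * np.pi * t / 30.0) \
            + 0.05 * np.random.default_rng(0).standard_normal(t.size)

peaks, _ = find_peaks(zone_temp, prominence=0.5, distance=10)
valleys, _ = find_peaks(-zone_temp, prominence=0.5, distance=10)

cycles_per_hour = len(peaks) / 24.0                                       # RTU cycling rate
setpoint_estimate = 0.5 * (zone_temp[peaks].mean() + zone_temp[valleys].mean())
deadband_estimate = zone_temp[peaks].mean() - zone_temp[valleys].mean()
print(cycles_per_hour, setpoint_estimate, deadband_estimate)
```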
Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds
NASA Technical Reports Server (NTRS)
Jardin, Matthew R.
2004-01-01
A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum- time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real-time. By traveling minimum-time routes instead of direct great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real-time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.
Huang, Ting-Shuo; Huang, Shie-Shian; Shyu, Yu-Chiau; Lee, Chun-Hui; Jwo, Shyh-Chuan; Chen, Pei-Jer; Chen, Huang-Yang
2014-01-01
Procalcitonin (PCT)-based algorithms have been used to guide antibiotic therapy in several clinical settings. However, evidence supporting PCT-based algorithms for secondary peritonitis after emergency surgery is scarce. In this study, we aimed to investigate whether a PCT-based algorithm could safely reduce antibiotic exposure in this population. From April 2012 to March 2013, patients diagnosed with secondary peritonitis at the emergency department who underwent emergency surgery were screened for eligibility. PCT levels were obtained pre-operatively, on post-operative days 1, 3, 5, and 7, and on subsequent days if needed. Antibiotics were discontinued if PCT was <1.0 ng/mL or had decreased by 80% versus day 1, with resolution of clinical signs. Primary endpoints were the time to discontinuation of intravenous antibiotics for the first episode and adverse events. Historical controls were retrieved for propensity score matching. After matching, 30 patients in the PCT group and 60 in the control group were included for analysis. The median duration of antibiotic exposure was 3.4 days (interquartile range [IQR] 2.2 days) in the PCT group versus 6.1 days (IQR 3.2 days) in the control group (p < 0.001). The PCT algorithm significantly improved time to antibiotic discontinuation (p < 0.001, log-rank test). The rates of adverse events were comparable between the two groups. A multivariate-adjusted extended Cox model demonstrated that the PCT-based algorithm was significantly associated with an 87% reduction in the hazard of antibiotic exposure within 7 days (hazard ratio [HR] 0.13, 95% CI 0.07-0.21, p < 0.001), and a 68% reduction in hazard after 7 days (adjusted HR 0.32, 95% CI 0.11-0.99, p = 0.047). Advanced age, coexisting pulmonary diseases, and higher severity of illness were significantly associated with longer durations of antibiotic use. The PCT-based algorithm safely reduced antibiotic exposure in this study. Further randomized trials are needed to confirm our findings and incorporate cost-effectiveness analysis. Australian New Zealand Clinical Trials Registry ACTRN12612000601831.
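The discontinuation rule quoted in the abstract (stop when PCT is below 1.0 ng/mL or has fallen by at least 80% from the post-operative day 1 value, with resolution of clinical signs) is simple enough to express directly. The sketch below encodes only that stated rule; the function name, surrounding workflow, and example values are illustrative, not the trial protocol.

```python
# Sketch of the stopping rule stated in the abstract. Thresholds come from the
# text; everything else (names, example values) is illustrative only.
def may_stop_antibiotics(pct_today_ng_ml, pct_day1_ng_ml, clinical_signs_resolved):
    if not clinical_signs_resolved:
        return False
    below_absolute = pct_today_ng_ml < 1.0                      # ng/mL threshold
    relative_drop = (1.0 - pct_today_ng_ml / pct_day1_ng_ml) if pct_day1_ng_ml > 0 else 0.0
    return below_absolute or relative_drop >= 0.80              # >= 80% fall vs. day 1

# Example: day-5 PCT of 2.4 ng/mL after a day-1 value of 14.0 ng/mL (an 83% drop).
print(may_stop_antibiotics(2.4, 14.0, clinical_signs_resolved=True))   # True
```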
Rule-based fault diagnosis of hall sensors and fault-tolerant control of PMSM
NASA Astrophysics Data System (ADS)
Song, Ziyou; Li, Jianqiu; Ouyang, Minggao; Gu, Jing; Feng, Xuning; Lu, Dongbin
2013-07-01
Hall sensors are widely used to estimate the rotor phase of permanent magnet synchronous motors (PMSMs). Because rotor position is an essential input to the PMSM control algorithm, a Hall sensor fault can be very dangerous, yet there is scarcely any research focusing on the fault diagnosis and fault-tolerant control of Hall sensors used in PMSMs. From this standpoint, the Hall sensor faults that may occur during PMSM operation are theoretically analyzed. Based on the analysis results, a fault diagnosis algorithm for the Hall sensors, built on three rules, is proposed to classify the fault phenomena accurately. Rotor phase estimation algorithms based on one or two Hall sensors are then used to construct the fault-tolerant control algorithm. The fault diagnosis algorithm can detect 60 Hall fault phenomena in total, and all detections can be completed within 1/138 of a rotor rotation period. The fault-tolerant control algorithm achieves smooth torque production, giving the same control effect as the normal control mode with three Hall sensors. Finally, a PMSM bench test verifies the accuracy and speed of the fault diagnosis and fault-tolerant control strategies. The fault diagnosis algorithm detects all Hall sensor faults promptly, and the fault-tolerant control algorithm allows the PMSM to operate through failures of one or two Hall sensors. In addition, the transitions between healthy control and fault-tolerant control are smooth, without additional noise or harshness. The proposed algorithms can handle Hall sensor faults of PMSMs in real applications.
Algorithms for output feedback, multiple-model, and decentralized control problems
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.
A parallel adaptive quantum genetic algorithm for the controllability of arbitrary networks.
Li, Yuhong; Gong, Guanghong; Li, Ni
2018-01-01
In this paper, we propose a novel parallel adaptive quantum genetic algorithm that can rapidly determine the minimum set of control nodes for arbitrary networks containing both control nodes and state nodes. The corresponding network can be fully controlled with the obtained control scheme. We transform the network controllability issue into a combinatorial optimization problem based on the Popov-Belevitch-Hautus rank condition. A set of canonical networks and a list of real-world networks were used in experiments. Comparison results demonstrate that the algorithm is better suited to optimizing the controllability of networks, especially larger networks. We subsequently show that there are links between the optimal control nodes and certain statistical characteristics of the networks. The proposed algorithm provides an effective approach to improving the controllability optimization of large networks, or even extra-large networks with hundreds of thousands of nodes.
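The abstract casts minimum-control-node selection as an optimization over the Popov-Belevitch-Hautus (PBH) rank condition. As a point of reference, the sketch below implements just the PBH test itself in NumPy; a candidate-evaluation step of a genetic algorithm could call such a check, though the paper's quantum-inspired encoding and parallel scheme are not reproduced here.

```python
# PBH test: (A, B) is controllable iff rank([lambda*I - A, B]) = n for every
# eigenvalue lambda of A. The example matrices below are toy assumptions.
import numpy as np

def is_controllable_pbh(A, B, tol=1e-9):
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            return False
    return True

# Toy 3-node chain: x1' = x2, x2' = x3, x3' driven only by the input.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B_bad = np.array([[1.0], [0.0], [0.0]])   # input at a node that drives nothing else
B_good = np.array([[0.0], [0.0], [1.0]])  # input at the node that drives the chain
print(is_controllable_pbh(A, B_bad))      # False
print(is_controllable_pbh(A, B_good))     # True
```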
Efficient Actor Recovery Paradigm for Wireless Sensor and Actor Networks
Mahjoub, Reem K.; Elleithy, Khaled
2017-01-01
Actor nodes are the spine of wireless sensor and actor networks (WSANs), collaborating to perform specific tasks in unverified and uneven environments. In such unfriendly scenarios there is a high possibility of failure due to several factors, such as the power consumption of devices, electronic circuit failure, software errors in nodes, physical impairment of the actor nodes, or inter-actor connectivity problems. It is therefore extremely important to detect the failure of a cut-vertex actor and the resulting network partition in order to maintain Quality-of-Service (QoS). In this paper, we propose an Efficient Actor Recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm consists of a Node Monitoring and Critical Node Detection (NMCND) algorithm that monitors the activities of the nodes to determine the critical node and replaces it with a backup node before complete node failure, which helps balance network performance. Packets are handled using a Network Integration and Message Forwarding (NIMF) algorithm that determines whether packets are forwarded from an actor or a sensor node; this decision-making capability controls the packet forwarding rate to keep the network running longer. Furthermore, for proper routing, a Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm decides the priority of the packets to be forwarded based on the significance of the information they carry. To validate the effectiveness of the proposed EAR paradigm, the proposed algorithms were tested using OMNET++ simulation. PMID:28420102
Efficient Actor Recovery Paradigm for Wireless Sensor and Actor Networks.
Mahjoub, Reem K; Elleithy, Khaled
2017-04-14
Actor nodes are the spine of wireless sensor and actor networks (WSANs), collaborating to perform specific tasks in unverified and uneven environments. In such unfriendly scenarios there is a high possibility of failure due to several factors, such as the power consumption of devices, electronic circuit failure, software errors in nodes, physical impairment of the actor nodes, or inter-actor connectivity problems. It is therefore extremely important to detect the failure of a cut-vertex actor and the resulting network partition in order to maintain Quality-of-Service (QoS). In this paper, we propose an Efficient Actor Recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm consists of a Node Monitoring and Critical Node Detection (NMCND) algorithm that monitors the activities of the nodes to determine the critical node and replaces it with a backup node before complete node failure, which helps balance network performance. Packets are handled using a Network Integration and Message Forwarding (NIMF) algorithm that determines whether packets are forwarded from an actor or a sensor node; this decision-making capability controls the packet forwarding rate to keep the network running longer. Furthermore, for proper routing, a Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm decides the priority of the packets to be forwarded based on the significance of the information they carry. To validate the effectiveness of the proposed EAR paradigm, the proposed algorithms were tested using OMNET++ simulation.
Closed-loop insulin delivery during pregnancy complicated by type 1 diabetes.
Murphy, Helen R; Elleri, Daniela; Allen, Janet M; Harris, Julie; Simmons, David; Rayman, Gerry; Temple, Rosemary; Dunger, David B; Haidar, Ahmad; Nodale, Marianna; Wilinska, Malgorzata E; Hovorka, Roman
2011-02-01
This study evaluated closed-loop insulin delivery with a model predictive control (MPC) algorithm during early (12-16 weeks) and late gestation (28-32 weeks) in pregnant women with type 1 diabetes. Ten women with type 1 diabetes (age 31 years, diabetes duration 19 years, BMI 24.1 kg/m(2), booking A1C 6.9%) were studied over 24 h during early (14.8 weeks) and late pregnancy (28.0 weeks). A nurse adjusted the basal insulin infusion rate from continuous glucose measurements (CGM), fed into the MPC algorithm every 15 min. Mean glucose and time spent in target (63-140 mg/dL), hyperglycemic (>140 to ≥ 180 mg/dL), and hypoglycemic (<63 to ≤ 50 mg/dL) were calculated using plasma and sensor glucose measurements. Linear mixed-effects models were used to compare glucose control during early and late gestation. During closed-loop insulin delivery, median (interquartile range) plasma glucose levels were 117 (100.8-154.8) mg/dL in early and 126 (109.8-140.4) mg/dL in late gestation (P = 0.72). The overnight mean (interquartile range) plasma glucose time in target was 84% (50-100%) in early and 100% (94-100%) in late pregnancy (P = 0.09). Overnight mean (interquartile range) time spent hyperglycemic (>140 mg/dL) was 7% (0-40%) in early and 0% (0-6%) in late pregnancy (P = 0.25) and hypoglycemic (<63 mg/dL) was 0% (0-3%) and 0% (0-0%), respectively (P = 0.18). Postprandial glucose control, glucose variability, insulin infusion rates, and CGM sensor accuracy were no different in early or late pregnancy. MPC algorithm performance was maintained throughout pregnancy, suggesting that overnight closed-loop insulin delivery could be used safely during pregnancy. More work is needed to achieve optimal postprandial glucose control.
Closed-Loop Insulin Delivery During Pregnancy Complicated by Type 1 Diabetes
Murphy, Helen R.; Elleri, Daniela; Allen, Janet M.; Harris, Julie; Simmons, David; Rayman, Gerry; Temple, Rosemary; Dunger, David B.; Haidar, Ahmad; Nodale, Marianna; Wilinska, Malgorzata E.; Hovorka, Roman
2011-01-01
OBJECTIVE This study evaluated closed-loop insulin delivery with a model predictive control (MPC) algorithm during early (12–16 weeks) and late gestation (28–32 weeks) in pregnant women with type 1 diabetes. RESEARCH DESIGN AND METHODS Ten women with type 1 diabetes (age 31 years, diabetes duration 19 years, BMI 24.1 kg/m2, booking A1C 6.9%) were studied over 24 h during early (14.8 weeks) and late pregnancy (28.0 weeks). A nurse adjusted the basal insulin infusion rate from continuous glucose measurements (CGM), fed into the MPC algorithm every 15 min. Mean glucose and time spent in target (63–140 mg/dL), hyperglycemic (>140 to ≥180 mg/dL), and hypoglycemic (<63 to ≤50 mg/dL) were calculated using plasma and sensor glucose measurements. Linear mixed-effects models were used to compare glucose control during early and late gestation. RESULTS During closed-loop insulin delivery, median (interquartile range) plasma glucose levels were 117 (100.8–154.8) mg/dL in early and 126 (109.8–140.4) mg/dL in late gestation (P = 0.72). The overnight mean (interquartile range) plasma glucose time in target was 84% (50–100%) in early and 100% (94–100%) in late pregnancy (P = 0.09). Overnight mean (interquartile range) time spent hyperglycemic (>140 mg/dL) was 7% (0–40%) in early and 0% (0–6%) in late pregnancy (P = 0.25) and hypoglycemic (<63 mg/dL) was 0% (0–3%) and 0% (0–0%), respectively (P = 0.18). Postprandial glucose control, glucose variability, insulin infusion rates, and CGM sensor accuracy were no different in early or late pregnancy. CONCLUSIONS MPC algorithm performance was maintained throughout pregnancy, suggesting that overnight closed-loop insulin delivery could be used safely during pregnancy. More work is needed to achieve optimal postprandial glucose control. PMID:21216859
PSO Algorithm for an Optimal Power Controller in a Microgrid
NASA Astrophysics Data System (ADS)
Al-Saedi, W.; Lachowicz, S.; Habibi, D.; Bass, O.
2017-07-01
This paper presents a Particle Swarm Optimization (PSO) algorithm to improve the quality of the power supply in a microgrid. The algorithm is proposed as a real-time self-tuning method for the power controller of an inverter-based Distributed Generation (DG) unit. In such a system, voltage and frequency are the main control objectives, particularly when the microgrid is islanded or during load changes. In this work, the PSO algorithm is implemented to find the optimal controller parameters that satisfy the control objectives. The results show that the applied PSO algorithm performs well in regulating the microgrid voltage and frequency.
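The abstract does not give the controller structure or objective in detail, so the following is a generic PSO sketch of the self-tuning idea: a swarm searches two controller gains against a user-supplied cost. The quadratic cost is a hypothetical stand-in; in practice it would be replaced by a simulation or measurement of the inverter-based DG unit's voltage and frequency regulation error.

```python
# Minimal PSO sketch for controller self-tuning. The cost function below is a
# stand-in, not the paper's microgrid model.
import numpy as np

rng = np.random.default_rng(0)

def cost(gains):
    kp, ki = gains
    # Hypothetical objective with a known optimum at (kp, ki) = (2.0, 0.5).
    return (kp - 2.0) ** 2 + 10.0 * (ki - 0.5) ** 2

def pso(cost_fn, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # positions (gain sets)
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_f = x.copy(), np.array([cost_fn(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost_fn(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best_gains, best_cost = pso(cost, bounds=[(0.0, 10.0), (0.0, 5.0)])
print(f"tuned gains kp={best_gains[0]:.2f}, ki={best_gains[1]:.2f}, cost={best_cost:.4f}")
```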
Chen, Zhe; Purdon, Patrick L.; Brown, Emery N.; Barbieri, Riccardo
2012-01-01
In recent years, time-varying inhomogeneous point process models have been introduced for assessment of instantaneous heartbeat dynamics as well as specific cardiovascular control mechanisms and hemodynamics. Assessment of the model’s statistics is established through the Wiener-Volterra theory and a multivariate autoregressive (AR) structure. A variety of instantaneous cardiovascular metrics, such as heart rate (HR), heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreceptor-cardiac reflex (baroreflex) sensitivity (BRS), are derived within a parametric framework and instantaneously updated with adaptive and local maximum likelihood estimation algorithms. Inclusion of second-order non-linearities, with subsequent bispectral quantification in the frequency domain, further allows for definition of instantaneous metrics of non-linearity. We here present a comprehensive review of the devised methods as applied to experimental recordings from healthy subjects during propofol anesthesia. Collective results reveal interesting dynamic trends across the different pharmacological interventions operated within each anesthesia session, confirming the ability of the algorithm to track important changes in cardiorespiratory elicited interactions, and pointing at our mathematical approach as a promising monitoring tool for an accurate, non-invasive assessment in clinical practice. We also discuss the limitations and other alternative modeling strategies of our point process approach. PMID:22375120
Implementation of Nonlinear Control Laws for an Optical Delay Line
NASA Technical Reports Server (NTRS)
Hench, John J.; Lurie, Boris; Grogan, Robert; Johnson, Richard
2000-01-01
This paper discusses the implementation of a globally stable nonlinear controller algorithm for the Real-Time Interferometer Control System Testbed (RICST) brassboard optical delay line (ODL) developed for the Interferometry Technology Program at the Jet Propulsion Laboratory. The control methodology essentially employs loop shaping to implement linear control laws, while utilizing nonlinear elements as a means of ameliorating the effects of actuator saturation in its coarse, main, and vernier stages. The linear controllers were implemented as high-order digital filters and were designed using Bode integral techniques to determine the loop shape. The nonlinear techniques encompass exact linearization, anti-windup control, nonlinear rate limiting, and modal control. Details of the design procedure are given, as well as data from the actual mechanism.
Application of gain scheduling to the control of batch bioreactors
NASA Technical Reports Server (NTRS)
Cardello, Ralph; San, Ka-Yiu
1987-01-01
The implementation of control algorithms on batch bioreactors is often complicated by the inherent variations in process dynamics during the course of fermentation. Such a wide operating range may render the performance of fixed-gain PID controllers unsatisfactory. In this work, a detailed study of the control of batch fermentation is performed. Furthermore, a simple batch controller design is proposed that incorporates the concept of gain scheduling, a subclass of adaptive control, with oxygen uptake rate as an auxiliary variable. Control of the oxygen tension in the bioreactor is used as a vehicle to convey the proposed idea, analysis, and results. Simulation experiments indicate that significant improvement in controller performance can be achieved with the proposed approach, even in the presence of measurement noise.
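As a rough illustration of the gain-scheduling idea with oxygen uptake rate (OUR) as the auxiliary variable, the sketch below interpolates PI gains from a schedule indexed by OUR. The gain table, breakpoints, and sample values are hypothetical; the paper's controller and bioreactor model are not reproduced.

```python
# Sketch of gain scheduling for dissolved-oxygen control, with oxygen uptake
# rate (OUR) as the scheduling variable. All numbers are illustrative assumptions.
import numpy as np

# Hypothetical schedule: higher OUR (faster dynamics) -> lower controller gains.
OUR_BREAKPOINTS = np.array([1.0, 5.0, 20.0, 50.0])    # mmol O2 / (L h)
KP_TABLE = np.array([8.0, 5.0, 2.5, 1.0])
KI_TABLE = np.array([0.8, 0.5, 0.25, 0.1])

class GainScheduledPI:
    def __init__(self, dt):
        self.dt = dt
        self.integral = 0.0

    def update(self, setpoint, measurement, our):
        # Interpolate the gains from the current oxygen uptake rate.
        kp = np.interp(our, OUR_BREAKPOINTS, KP_TABLE)
        ki = np.interp(our, OUR_BREAKPOINTS, KI_TABLE)
        error = setpoint - measurement
        self.integral += error * self.dt
        return kp * error + ki * self.integral        # e.g. aeration valve command

ctrl = GainScheduledPI(dt=1.0)
# Same dissolved-oxygen error, early fermentation (low OUR) vs. peak growth (high OUR).
print(ctrl.update(setpoint=0.4, measurement=0.3, our=2.0))
print(ctrl.update(setpoint=0.4, measurement=0.3, our=40.0))
```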
NASA Technical Reports Server (NTRS)
Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.
2012-01-01
Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather along with two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation in which pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace and, on some trials, for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms, or controller-managed spacing, not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they proved compatible in achieving the interval management goal of 130 sec by the TRACON boundary.
Deadbeat Predictive Controllers
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1997-01-01
Several new computational algorithms are presented to compute the deadbeat predictive control law. The first algorithm makes use of a multi-step-ahead output prediction to compute the control law without explicitly calculating the controllability matrix. The system identification must be performed first, and then the predictive control law is designed. The second algorithm uses the input and output data directly to compute the feedback law; it combines the system identification and the predictive control law into one formulation. The third algorithm uses an observable-canonical-form realization to design the predictive controller. The relationship between all three algorithms is established through the use of the state-space representation. All algorithms are applicable to multi-input, multi-output systems with disturbance inputs. In addition to the feedback terms, feedforward terms may also be added for disturbance inputs if they are measurable. Although the feedforward terms do not influence the stability of the closed-loop feedback law, they enhance the performance of the controlled system.
Tire-road friction estimation and traction control strategy for motorized electric vehicle.
Jin, Li-Qiang; Ling, Mingze; Yue, Weiqiang
2017-01-01
In this paper, a system for real-time identification of the optimal longitudinal slip ratio of an electric vehicle (EV) with motorized wheels is proposed, based on the adhesion between the tire and the road surface. First, the optimal longitudinal slip ratio for torque control is identified in real time by calculating the derivative of the adhesion coefficient with respect to the slip ratio. Second, a vehicle speed estimation method is introduced. Third, an ideal vehicle simulation model is used to verify the algorithm in simulation, and the identified slip ratio corresponds to the adhesion limit detected in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the wheel state and calculate the optimal slip ratio without a wheel speed sensor, while improving the acceleration stability of an electric vehicle equipped with a TCS.
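A rough sketch of the slip-ratio idea in the abstract: track the derivative of the adhesion coefficient with respect to slip and flag the slip ratio where that derivative drops to zero (the adhesion peak). The Burckhardt-style mu(slip) curve and its parameters are assumptions used only to generate illustrative data, not the paper's vehicle model.

```python
# Locate the optimal slip ratio as the point where d(mu)/d(slip) first crosses
# zero. The mu(slip) model and its parameters are illustrative assumptions.
import numpy as np

def mu_burckhardt(s, c1=1.28, c2=23.99, c3=0.52):   # dry-asphalt-like parameters
    return c1 * (1.0 - np.exp(-c2 * s)) - c3 * s

def optimal_slip(slip, mu):
    """Return the slip ratio where d(mu)/d(slip) first changes sign (+ to -)."""
    dmu_ds = np.gradient(mu, slip)
    sign_change = np.where(np.diff(np.sign(dmu_ds)) < 0)[0]
    return slip[sign_change[0]] if len(sign_change) else slip[np.argmax(mu)]

slip = np.linspace(1e-3, 0.5, 500)          # longitudinal slip ratio samples
mu = mu_burckhardt(slip)
s_opt = optimal_slip(slip, mu)
print(f"estimated optimal slip ratio ~{s_opt:.3f}, peak adhesion ~{mu.max():.2f}")
```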
NASA Technical Reports Server (NTRS)
Comstock, James R., Jr.; Ghatas, Rania W.; Consiglio, Maria C.; Chamberlain, James P.; Hoffler, Keith D.
2016-01-01
This study evaluated the effects of Communications Delays and Winds on Air Traffic Controller ratings of acceptability of horizontal miss distances (HMDs) for encounters between UAS and manned aircraft in a simulation of the Dallas-Ft. Worth East-side airspace. Fourteen encounters per hour were staged in the presence of moderate background traffic. Seven recently retired controllers with experience at DFW served as subjects. Guidance provided to the UAS pilots for maintaining a given HMD was provided by information from self-separation algorithms displayed on the Multi-Aircraft Simulation System. Winds tested did not affect the acceptability ratings. Communications delays tested included 0, 400, 1200, and 1800 msec. For longer communications delays, there were changes in strategy and communications flow that were observed and reported by the controllers. The aim of this work is to provide useful information for guiding future rules and regulations applicable to flying UAS in the NAS.
Tire-road friction estimation and traction control strategy for motorized electric vehicle
Jin, Li-Qiang; Yue, Weiqiang
2017-01-01
In this paper, a system for real-time identification of the optimal longitudinal slip ratio of an electric vehicle (EV) with motorized wheels is proposed, based on the adhesion between the tire and the road surface. First, the optimal longitudinal slip ratio for torque control is identified in real time by calculating the derivative of the adhesion coefficient with respect to the slip ratio. Second, a vehicle speed estimation method is introduced. Third, an ideal vehicle simulation model is used to verify the algorithm in simulation, and the identified slip ratio corresponds to the adhesion limit detected in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the wheel state and calculate the optimal slip ratio without a wheel speed sensor, while improving the acceleration stability of an electric vehicle equipped with a TCS. PMID:28662053
Venugopal, G; Deepak, P; Ghosh, Diptasree M; Ramakrishnan, S
2017-11-01
Surface electromyography is a non-invasive technique used for recording the electrical activity of neuromuscular systems. These signals are random, complex and multi-component. There are several techniques to extract information about the force exerted by muscles during any activity. This work attempts to generate surface electromyography signals for various magnitudes of force under isometric non-fatigue and fatigue conditions using a feedback model. The model is based on existing current distribution, volume conductor relations, the feedback control algorithm for rate coding and generation of firing pattern. The result shows that synthetic surface electromyography signals are highly complex in both non-fatigue and fatigue conditions. Furthermore, surface electromyography signals have higher amplitude and lower frequency under fatigue condition. This model can be used to study the influence of various signal parameters under fatigue and non-fatigue conditions.
Achieving Real-Time Tracking Mobile Wireless Sensors Using SE-KFA
NASA Astrophysics Data System (ADS)
Kadhim Hoomod, Haider, Dr.; Al-Chalabi, Sadeem Marouf M.
2018-05-01
Nowadays, real-time tracking is very important in different fields, such as automated transport control, some medical applications, celestial body tracking, controlling agent movements, detection, and monitoring. It can be performed with different kinds of detection devices, known as sensors, such as infrared sensors, ultrasonic sensors, radar in general, and laser light sensors. The ultrasonic sensor is the most fundamental of these, and it poses significant challenges compared with the others, especially when navigating as an agent. In this paper, ultrasonic sensors perform detection and delimitation by themselves and then navigate inside a limited area, with real-time tracking estimated using a speed equation combined with the Kalman filter algorithm as an intelligent estimation algorithm. The estimation error is then calculated by comparison with the actual tracking rate. The implementation uses the HC-SR04 ultrasonic sensor with an Arduino UNO microcontroller.
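The abstract pairs a speed equation with a Kalman filter for real-time estimation from ultrasonic range readings. The sketch below is a generic constant-velocity Kalman filter in that spirit; the noise levels, sampling rate, and simulated target are assumptions, not the SE-KFA implementation.

```python
# Constant-velocity Kalman filter for noisy ultrasonic range readings.
# Noise levels and the simulated target are illustrative assumptions.
import numpy as np

def track_range(measurements_cm, dt=0.1, meas_std_cm=3.0, accel_std=5.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # state: [range, speed]
    H = np.array([[1.0, 0.0]])                            # we only measure range
    Q = accel_std**2 * np.array([[dt**4 / 4, dt**3 / 2],  # process noise (accel-driven)
                                 [dt**3 / 2, dt**2]])
    R = np.array([[meas_std_cm**2]])
    x = np.array([[measurements_cm[0]], [0.0]])
    P = np.eye(2) * 100.0
    estimates = []
    for z in measurements_cm:
        # Predict with the motion (speed) model, then correct with the sensor.
        x = F @ x
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.ravel().copy())
    return np.array(estimates)                            # columns: range, speed

rng = np.random.default_rng(1)
true_range = 200.0 - 25.0 * np.arange(0, 5, 0.1)          # target closing at 25 cm/s
noisy = true_range + rng.normal(0.0, 3.0, true_range.size)
est = track_range(noisy)
print(f"final estimate: range ~{est[-1, 0]:.1f} cm, speed ~{est[-1, 1]:.1f} cm/s")
```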
Slow Orbit Feedback at the ALS Using Matlab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Portmann, G.
1999-03-25
The third-generation Advanced Light Source (ALS) produces extremely bright and finely focused photon beams using undulators, wigglers, and bend magnets. In order to position the photon beams accurately, a slow global orbit feedback system has been developed. The dominant causes of orbit motion at the ALS are temperature variation and insertion device motion. This type of motion can be removed using slow global orbit feedback with a data rate of a few Hertz. The remaining orbit motion in the ALS is only 1-3 micron rms. Slow orbit feedback does not require high computational throughput. At the ALS, the global orbit feedback algorithm, based on the singular value decomposition method, is coded in MATLAB and runs on a control room workstation. Using the MATLAB environment to develop, test, and run the storage ring control algorithms has proven to be a fast and efficient way to operate the ALS.
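Although the ALS system is coded in MATLAB, the same SVD-based correction step can be sketched in Python: given an orbit response matrix R and a measured orbit error, corrector increments are obtained from a truncated pseudo-inverse of R. The matrix sizes and synthetic data below are illustrative, not ALS values.

```python
# One slow-orbit-feedback iteration via the SVD method: corrector increments are
# -pinv(R) @ orbit_error, with small singular values truncated for robustness.
import numpy as np

def orbit_correction(R, orbit_error, sv_cutoff=1e-3):
    """R: (n_bpms x n_correctors) orbit response matrix; returns corrector deltas."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s_inv = np.where(s > sv_cutoff * s.max(), 1.0 / s, 0.0)   # drop weak modes
    R_pinv = Vt.T @ np.diag(s_inv) @ U.T
    return -R_pinv @ orbit_error

rng = np.random.default_rng(2)
R = rng.normal(size=(24, 8))                 # 24 BPMs, 8 corrector magnets (toy sizes)
true_kick = rng.normal(scale=0.1, size=8)    # hidden corrector error driving the orbit
orbit = R @ true_kick + rng.normal(scale=1e-3, size=24)   # measured orbit distortion
delta = orbit_correction(R, orbit)
print("residual orbit rms:", np.std(R @ (true_kick + delta)))
```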
Multilayer Statistical Intrusion Detection in Wireless Networks
NASA Astrophysics Data System (ADS)
Hamdi, Mohamed; Meddeb-Makhlouf, Amel; Boudriga, Noureddine
2008-12-01
The rapid proliferation of mobile applications and services has introduced new vulnerabilities that do not exist in fixed wired networks. Traditional security mechanisms, such as access control and encryption, turn out to be inefficient in modern wireless networks. Given the shortcomings of these protection mechanisms, an important research focus is intrusion detection systems (IDSs). This paper proposes a multilayer statistical intrusion detection framework for wireless networks. The architecture is well suited to wireless networks because the underlying detection models rely on radio parameters and traffic models. Accurate correlation between radio and traffic anomalies enhances the efficiency of the IDS. A radio signal fingerprinting technique based on the maximal overlap discrete wavelet transform (MODWT) is developed. Moreover, a geometric clustering algorithm is presented. Depending on the characteristics of the fingerprinting technique, the clustering algorithm makes it possible to control the false positive and false negative rates. Finally, simulation experiments have been carried out to validate the proposed IDS.
Strategic Control Algorithm Development : Volume 3. Strategic Algorithm Report.
DOT National Transportation Integrated Search
1974-08-01
The strategic algorithm report presents a detailed description of the functional basic strategic control arrival algorithm. This description is independent of a particular computer or language. Contained in this discussion are the geometrical and env...
Neural Generalized Predictive Control: A Newton-Raphson Implementation
NASA Technical Reports Server (NTRS)
Soloway, Donald; Haley, Pamela J.
1997-01-01
An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. By using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced compared with other techniques. The main cost of the Newton-Raphson algorithm is the calculation of the Hessian, but even with this overhead the low iteration count makes Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations, and timing data show that real-time control is possible. Comments about the algorithm's implementation are also included.
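To make the Newton-Raphson step concrete, the sketch below minimizes a finite-horizon GPC-style cost over the future control moves using numerically estimated gradients and Hessians. A small nonlinear difference equation stands in for the paper's neural-network plant model; horizons, weights, and the plant itself are assumptions.

```python
# Newton-Raphson minimization of a GPC-style cost over a control horizon.
# The plant model, horizons, and weights below are illustrative assumptions.
import numpy as np

def plant_step(y, u):                      # stand-in one-step nonlinear model
    return 0.8 * y + 0.4 * np.tanh(u)

def gpc_cost(u_seq, y0, ref, lam=0.05):
    y, J, u_prev = y0, 0.0, 0.0
    for u in u_seq:
        y = plant_step(y, u)
        J += (y - ref) ** 2 + lam * (u - u_prev) ** 2
        u_prev = u
    return J

def newton_raphson_gpc(y0, ref, horizon=5, iters=3, eps=1e-4):
    u = np.zeros(horizon)
    for _ in range(iters):                 # a few Newton updates usually suffice
        g = np.zeros(horizon)
        H = np.zeros((horizon, horizon))
        J0 = gpc_cost(u, y0, ref)
        for i in range(horizon):           # finite-difference gradient and Hessian
            ei = np.eye(horizon)[i] * eps
            g[i] = (gpc_cost(u + ei, y0, ref) - gpc_cost(u - ei, y0, ref)) / (2 * eps)
            for j in range(i, horizon):
                ej = np.eye(horizon)[j] * eps
                Hij = (gpc_cost(u + ei + ej, y0, ref) - gpc_cost(u + ei, y0, ref)
                       - gpc_cost(u + ej, y0, ref) + J0) / eps**2
                H[i, j] = H[j, i] = Hij
        u = u - np.linalg.solve(H + 1e-6 * np.eye(horizon), g)   # Newton update
    return u[0]                            # apply only the first move (receding horizon)

print(f"first control move toward ref=1.0: {newton_raphson_gpc(y0=0.0, ref=1.0):.3f}")
```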
Multidisciplinary Techniques and Novel Aircraft Control Systems
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Rogers, James L.; Raney, David L.
2000-01-01
The Aircraft Morphing Program at NASA Langley Research Center explores opportunities to improve airframe designs with smart technologies. Two elements of this basic research program are multidisciplinary design optimization (MDO) and advanced flow control. This paper describes examples where MDO techniques such as sensitivity analysis, automatic differentiation, and genetic algorithms contribute to the design of novel control systems. In the test case, the design and use of distributed shape-change devices to provide low-rate maneuvering capability for a tailless aircraft is considered. The ability of MDO to add value to control system development is illustrated using results from several years of research funded by the Aircraft Morphing Program.
Multidisciplinary Techniques and Novel Aircraft Control Systems
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Rogers, James L.; Raney, David L.
2000-01-01
The Aircraft Morphing Program at NASA Langley Research Center explores opportunities to improve airframe designs with smart technologies. Two elements of this basic research program are multidisciplinary design optimization (MDO) and advanced flow control. This paper describes examples where MDO techniques such as sensitivity analysis, automatic differentiation, and genetic algorithms contribute to the design of novel control systems. In the test case, the design and use of distributed shape-change devices to provide low-rate maneuvering capability for a tailless aircraft is considered. The ability of MDO to add value to control system development is illustrated using results from several years of research funded by the Aircraft Morphing Program.
AdaBoost-based algorithm for network intrusion detection.
Hu, Weiming; Hu, Wei; Maybank, Steve
2008-04-01
Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
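A minimal reproduction of the weak-learner choice, decision stumps boosted by AdaBoost, can be sketched with scikit-learn on synthetic data. The paper's handling of mixed categorical/continuous network features, its adaptable initial weights, and its overfitting safeguard are not reproduced here. (On scikit-learn releases before 1.2 the constructor argument is base_estimator rather than estimator.)

```python
# AdaBoost with depth-1 decision trees (decision stumps) as weak classifiers,
# trained on a synthetic, imbalanced two-class data set standing in for
# normal-vs-attack network records.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)   # ~10% "attacks"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stump = DecisionTreeClassifier(max_depth=1)          # weak classifier
clf = AdaBoostClassifier(estimator=stump, n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"detection accuracy on held-out data: {clf.score(X_te, y_te):.3f}")
```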
Four-dimensional guidance algorithms for aircraft in an air traffic control environment
NASA Technical Reports Server (NTRS)
Pecsvaradi, T.
1975-01-01
Theoretical development and computer implementation of three guidance algorithms are presented. From a small set of input parameters the algorithms generate the ground track, altitude profile, and speed profile required to implement an experimental 4-D guidance system. Given a sequence of waypoints that define a nominal flight path, the first algorithm generates a realistic, flyable ground track consisting of a sequence of straight line segments and circular arcs. Each circular turn is constrained by the minimum turning radius of the aircraft. The ground track and the specified waypoint altitudes are used as inputs to the second algorithm which generates the altitude profile. The altitude profile consists of piecewise constant flight path angle segments, each segment lying within specified upper and lower bounds. The third algorithm generates a feasible speed profile subject to constraints on the rate of change in speed, permissible speed ranges, and effects of wind. Flight path parameters are then combined into a chronological sequence to form the 4-D guidance vectors. These vectors can be used to drive the autopilot/autothrottle of the aircraft so that a 4-D flight path could be tracked completely automatically; or these vectors may be used to drive the flight director and other cockpit displays, thereby enabling the pilot to track a 4-D flight path manually.
Respiration-rate estimation of a moving target using impulse-based ultra wideband radars.
Sharafi, Azadeh; Baboli, Mehran; Eshghi, Mohammad; Ahmadian, Alireza
2012-03-01
Recently, ultra-wideband signals have become attractive because of their high spatial resolution and good penetration ability, which make them suitable for medical applications. One of these applications is the wireless detection of heart rate and respiration rate. The methods presented in previous literature assume a static environment and a stationary patient, assumptions that are not valid for long-term monitoring of ambulatory patients. In this article, a new method to detect the respiration rate of a moving target is presented. A first algorithm is applied to simulated and experimental data to detect the respiration rate of a stationary target. A second algorithm is then developed to detect the respiration rate of a moving target. The proposed algorithm uses correlation for body-movement cancellation and then detects the respiration rate based on energy in the frequency domain. The results show an accuracy of 98.4% and 97% on simulated and experimental data, respectively.
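The final, frequency-domain step of the method, picking the dominant spectral peak of a motion-compensated chest signal within a plausible breathing band, can be sketched as follows. The correlation-based body-movement cancellation stage is omitted, and the synthetic signal and band limits are assumptions.

```python
# Estimate respiration rate as the dominant spectral peak of an (already
# motion-compensated) chest-displacement signal within a breathing band.
import numpy as np

def respiration_rate_bpm(signal, fs, band_hz=(0.1, 0.7)):
    signal = np.asarray(signal, dtype=float) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # energy in frequency domain
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    peak_hz = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak_hz                                # breaths per minute

fs = 20.0                                                # slow-time sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
chest = 0.5 * np.sin(2 * np.pi * 0.25 * t)               # 15 breaths/min target
chest += 0.05 * np.random.default_rng(3).normal(size=t.size)
print(f"estimated respiration rate: {respiration_rate_bpm(chest, fs):.1f} breaths/min")
```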
NASA Astrophysics Data System (ADS)
Xie, Yanan; Zhou, Mingliang; Pan, Dengke
2017-10-01
A forward-scattering model is introduced to describe the response of the normalized radar cross section (NRCS) of precipitation observed with synthetic aperture radar (SAR). Since the distribution of near-surface rainfall is related to the near-surface rainfall rate and a horizontal distribution factor, a retrieval algorithm called modified regression empirical and model-oriented statistical (M-M), based on Volterra integration theory, is proposed. Compared with the model-oriented statistical and Volterra integration (MOSVI) algorithm, the biggest difference is that the M-M algorithm is based on the modified regression empirical algorithm rather than a linear regression formula to retrieve the near-surface rainfall rate. The number of empirical parameters in the weighted integration is halved, and a smaller average relative error is obtained for rainfall rates below 100 mm/h. The algorithm proposed in this paper can therefore obtain high-precision rainfall information.
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Less, Brennan; Walker, Iain; Ticci, Sara
Past field research and simulation studies have shown that high performance homes experience elevated indoor humidity levels for substantial portions of the year in humid climates. This is largely the result of lower sensible cooling loads, which reduces the moisture removed by the cooling system. These elevated humidity levels lead to concerns about occupant comfort, health and building durability. Use of mechanical ventilation at rates specified in ASHRAE Standard 62.2-2013 is often cited as an additional contributor to humidity problems in these homes. Past research has explored solutions, including supplemental dehumidification, cooling system operational enhancements and ventilation system design (e.g., ERV, supply, exhaust, etc.). This project's goal is to develop and demonstrate (through simulations) smart ventilation strategies that can contribute to humidity control in high performance homes. These strategies must maintain IAQ via equivalence with ASHRAE Standard 62.2-2013. To be acceptable they must not result in excessive energy use. Smart controls will be compared with dehumidifier energy and moisture performance. This work explores the development and performance of smart algorithms for control of mechanical ventilation systems, with the objective of reducing high humidity in modern high performance residences. Simulations of DOE Zero-Energy Ready homes were performed using the REGCAP simulation tool. Control strategies were developed and tested using the Residential Integrated Ventilation (RIVEC) controller, which tracks pollutant exposure in real time and controls ventilation to provide an equivalent exposure on an annual basis to homes meeting ASHRAE 62.2-2013. RIVEC is used to increase or decrease the real-time ventilation rate to reduce moisture transport into the home or increase moisture removal. This approach was implemented for no-, one- and two-sensor strategies, paired with a variety of control approaches in six humid climates (Miami, Orlando, Houston, Charleston, Memphis and Baltimore). The control options were compared to a baseline system that supplies outdoor air to a central forced air cooling (and heating) system (CFIS), which is often used in hot humid climates. Simulations were performed with CFIS ventilation systems operating on a 33% duty-cycle, consistent with 62.2-2013. The CFIS outside airflow rates were set to 0%, 50% and 100% of 62.2-2013 requirements to explore effects of ventilation rate on indoor high humidity. These simulations were performed with and without a dehumidifier in the model. Ten control algorithms were developed and tested. Analysis of outdoor humidity patterns facilitated smart control development. It was found that outdoor humidity varies most strongly seasonally, by month of the year, and that all locations follow a similar pattern of much higher humidity during summer. Daily and hourly variations in outdoor humidity were found to be progressively smaller than the monthly seasonal variation. Patterns in hourly humidity are driven by diurnal daily patterns, so they were predictable but small, and were unlikely to provide much control benefit. Variation in outdoor humidity between days was larger, but unpredictable, except by much more complex climate models. We determined that no-sensor strategies might be able to take advantage of seasonal patterns in humidity, but that real-time smart controls were required to capture variation between days. Sensor-based approaches are also required to respond dynamically to indoor conditions and variations not considered in our analysis. All smart controls face trade-offs between sensor accuracy, cost, complexity and robustness.
Network congestion control algorithm based on Actor-Critic reinforcement learning model
NASA Astrophysics Data System (ADS)
Xu, Tao; Gong, Lina; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen
2018-04-01
To address the network congestion control problem, a congestion control algorithm based on an Actor-Critic reinforcement learning model is designed. By incorporating a genetic algorithm into the congestion control strategy, network congestion problems can be detected and prevented more effectively. A simulation experiment for the network congestion control algorithm is designed according to Actor-Critic reinforcement learning. The simulation experiments verify that the AQM controller can predict the dynamic characteristics of the network system. Moreover, the learning strategy is adopted to optimize network performance: the packet dropping probability is adaptively adjusted to improve performance and avoid congestion. Based on these findings, it is concluded that the network congestion control algorithm based on the Actor-Critic reinforcement learning model can effectively avoid the occurrence of TCP network congestion.
Photovoltaic Cells Mppt Algorithm and Design of Controller Monitoring System
NASA Astrophysics Data System (ADS)
Meng, X. Z.; Feng, H. B.
2017-10-01
This paper combines the advantages of several maximum power point tracking (MPPT) algorithms to put forward an algorithm with higher speed and higher precision, and based on this algorithm designs an ARM-based maximum power point tracking controller. The controller, communication link, and PC software form a control and monitoring system. Results of simulation and experiment show that the maximum power tracking process is effective and the system is stable.
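The abstract does not spell out the combined MPPT algorithm, so the sketch below shows the classic perturb-and-observe loop that hybrid MPPT controllers commonly build on. The P-V curve is a hypothetical stand-in for a real panel measurement.

```python
# Classic perturb-and-observe (P&O) MPPT loop on a hypothetical PV power curve.
import numpy as np

def pv_power(v):
    # Crude diode-like I-V model with a power maximum around 29 V (illustrative only).
    i = 8.0 * (1.0 - np.exp((v - 36.0) / 3.0))
    return max(v * i, 0.0)

def perturb_and_observe(v0=20.0, step=0.5, iters=60):
    v, p_prev, direction = v0, pv_power(v0), +1.0
    for _ in range(iters):
        v += direction * step                            # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                                   # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point settled near {v_mpp:.1f} V, {p_mpp:.1f} W")
```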
Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed
NASA Technical Reports Server (NTRS)
Tian, Ye; Song, Qi; Cattafesta, Louis
2005-01-01
This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work summarized consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive ID and control algorithms. The second part concentrates on the validation of adaptive algorithms by applying them to a vibration beam test bed. Extensions to flow control problems are discussed.
Detecting Anomalies in Process Control Networks
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Kang, Kyoung-Don
This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
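A toy version of the inspection idea, scoring the normalcy probability of a value written to one control-system variable with a maximum-likelihood logistic fit, can be sketched with scikit-learn. The synthetic training data, the squared feature (which lets a linear logistic model express an in-band rule), and the variable itself are assumptions, not the paper's estimation procedure.

```python
# Logistic-regression normalcy scoring for values written to a single
# control-system variable. Training data and features are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Hypothetical variable: normal writes cluster around 50 units; abnormal
# (injected) writes fall well outside that band on either side.
normal = rng.normal(50.0, 4.0, size=500)
abnormal = np.concatenate([rng.uniform(0, 25, 100), rng.uniform(80, 120, 100)])
values = np.concatenate([normal, abnormal])
labels = np.concatenate([np.ones(normal.size), np.zeros(abnormal.size)])  # 1 = normal

features = np.column_stack([values, values ** 2])   # squared term allows an in-band rule
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(features, labels)

def normalcy_probability(value):
    return model.predict_proba([[value, value ** 2]])[0, 1]

for v in (52.0, 95.0):
    print(f"value {v:5.1f} -> P(normal) = {normalcy_probability(v):.3f}")
```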
Real-Time Feedback Control of Flow-Induced Cavity Tones. Part 2; Adaptive Control
NASA Technical Reports Server (NTRS)
Kegerise, M. A.; Cabell, R. H.; Cattafesta, L. N., III
2006-01-01
An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. Past input-output data and an estimate of the open-loop pulse response sequence are all that is needed to implement the algorithm for application at fixed Mach numbers. Transient measurements made during controller adaptation revealed that the controller coefficients converged to a steady state in the mean, and this implies that adaptation can be turned off at some point with no degradation in control performance. When converged, the control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. However, as in the case of fixed-gain GPC, the adaptive GPC performance was limited by spillover in sidebands around the suppressed Rossiter modes. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Beyond this range, stable operation of the control algorithm was not possible due to the fixed plant model in the algorithm.
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very-large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equally sized sub-graphs while minimizing the number of edges cut, i.e., the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. Studies in the literature have shown the problem to be NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be further extended to incorporate local search operators that include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another avenue for future research.
A new algorithm for finding survival coefficients employed in reliability equations
NASA Technical Reports Server (NTRS)
Bouricius, W. G.; Flehinger, B. J.
1973-01-01
Product reliabilities are predicted from past failure rates and a reasonable estimate of future failure rates. The algorithm is used to calculate the probability that the product will function correctly. It sums, over all possible ways in which the product can survive, the probability of each survival pattern weighted by the number of permutations for that pattern.
Jacobs, Peter G.; El Youssef, Joseph; Castle, Jessica; Bakhtiani, Parkash; Branigan, Deborah; Breen, Matthew; Bauer, David; Preiser, Nicholas; Leonard, Gerald; Stonex, Tara; Ward, W. Kenneth
2014-01-01
Automated control of blood glucose in patients with type 1 diabetes has not yet been fully implemented. The aim of this study was to design and clinically evaluate a system that integrates a control algorithm with off-the-shelf subcutaneous sensors and pumps to automate the delivery of the hormones glucagon and insulin in response to continuous glucose sensor measurements. The automated component of the system runs an adaptive proportional derivative control algorithm which determines hormone delivery rates based on the sensed glucose measurements and the meal announcements by the patient. We provide details about the system design and the control algorithm, which incorporates both a fading memory proportional derivative controller (FMPD) and an adaptive system for estimating changing sensitivity to insulin based on a glucoregulatory model of insulin action. For an inpatient study carried out in eight subjects using Dexcom SEVEN PLUS sensors, pre-study HbA1c averaged 7.6, which translates to an estimated average glucose of 171 mg/dL. In contrast, during use of the automated system, after initial stabilization, glucose averaged 145 mg/dL and subjects were kept within the euglycemic range (between 70 and 180 mg/dL) for 73.1% of the time, indicating improved glycemic control. A further study on five additional subjects in which we used a newer and more reliable glucose sensor (Dexcom G4 PLATINUM) and made improvements to the insulin and glucagon pump communication system resulted in elimination of hypoglycemic events. For this G4 study, the system was able to maintain subjects’ glucose levels within the near-euglycemic range for 71.6% of the study duration and the mean venous glucose level was 151 mg/dL. PMID:24835122
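The abstract names a fading-memory proportional-derivative (FMPD) controller. The sketch below illustrates that general dosing rule: proportional and derivative glucose-error terms are averaged with exponentially fading weights and added to a basal rate. Gains, decay factors, target, and basal rate are illustrative assumptions, not the clinical system's tuning, and the adaptive insulin-sensitivity estimator is omitted.

```python
# Fading-memory proportional-derivative (FMPD) style dosing rule. All numeric
# parameters are illustrative assumptions, not the clinical controller's values.
import numpy as np

def fmpd_insulin_rate(glucose_history, dt_min=5.0, target=115.0, basal_u_hr=1.0,
                      kp=0.005, kd=0.06, lam_p=0.9, lam_d=0.7):
    g = np.asarray(glucose_history, dtype=float)
    errors = g - target                                   # mg/dL above target
    rates = np.diff(g) / dt_min                           # mg/dL per minute
    # Exponentially fading weights: most recent samples weighted most heavily.
    wp = lam_p ** np.arange(len(errors))[::-1]
    wd = lam_d ** np.arange(len(rates))[::-1]
    p_term = kp * np.sum(wp * errors) / np.sum(wp)
    d_term = kd * np.sum(wd * rates) / np.sum(wd) if len(rates) else 0.0
    return max(basal_u_hr + p_term + d_term, 0.0)         # U/h, never negative

# Rising glucose -> rate above basal; falling glucose -> rate pulled back.
print(fmpd_insulin_rate([140, 150, 162, 175, 190]))
print(fmpd_insulin_rate([190, 175, 162, 150, 140]))
```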
Automatic Incubator-type Temperature Control System for Brain Hypothermia Treatment
NASA Astrophysics Data System (ADS)
Gaohua, Lu; Wakamatsu, Hidetoshi
An automatic air-cooling incubator is proposed to replace the manual water-cooling blanket for controlling brain tissue temperature during brain hypothermia treatment. Its feasibility is discussed theoretically as follows. First, an adult patient with the cooling incubator is modeled as a linear dynamical patient-incubator biothermal system. The patient is represented by an 18-compartment structure and described by its state equations. The air-cooling incubator provides almost the same cooling effect as the water-cooling blanket if a light breeze of around 3 m/s is circulated in the incubator. Then, in order to control the brain temperature automatically, an adaptive-optimal control algorithm is adopted, with the patient-blanket therapeutic system considered as a reference model. Finally, the brain temperature of the patient-incubator biothermal system is controlled to follow the given reference temperature course, and the adaptive algorithm is confirmed to be useful under unknown environmental changes and/or changes in the patient's metabolic rate within the incubating system. The present work thus supports the development of the automatic air-cooling incubator for better temperature regulation in brain hypothermia treatment in the ICU.
NASA Astrophysics Data System (ADS)
Juhn, J.-W.; Lee, K. C.; Hwang, Y. S.; Domier, C. W.; Luhmann, N. C.; Leblanc, B. P.; Mueller, D.; Gates, D. A.; Kaita, R.
2010-10-01
The far infrared tangential interferometer/polarimeter (FIReTIP) of the National Spherical Torus Experiment (NSTX) has been set up to provide reliable electron density signals for a real-time density feedback control system. This work consists of two main parts: suppression of the fringe jumps that have prevented the plasma density from being used in direct feedback to actuators, and the conceptual design of a density feedback control system including the FIReTIP, control hardware, and software that takes advantage of the NSTX plasma control system (PCS). By investigating data from numerous shots after July 2009, when the new electronics were installed, fringe jumps in the FIReTIP have been well characterized, and consequently the suppression algorithms work properly, as shown by comparisons with the Thomson scattering diagnostic. This approach is also applicable to signals taken at a 5 kHz sampling rate, which is a fundamental constraint imposed by the digitizers providing inputs to the PCS. The fringe jump correction algorithm, as well as safety and feedback modules, will be included as submodules either in the gas injection system category or in a new density category of the PCS.
An algorithm for the kinetics of tire pyrolysis under different heating rates.
Quek, Augustine; Balasubramanian, Rajashekhar
2009-07-15
Tires exhibit different kinetic behaviors when pyrolyzed under different heating rates. A new algorithm has been developed to investigate the pyrolysis behavior of scrap tires. The algorithm includes heat and mass transfer equations to account for the different extents of thermal lag as the tire is heated at different rates. It uses an iterative approach to fit model equations to experimental data and obtain quantitative values of the kinetic parameters. These parameters describe the pyrolysis process well, with good agreement (r(2)>0.96) between the model and experimental data when the model is applied to three different brands of automobile tires heated at five different rates in a pure nitrogen atmosphere. The model agrees with other researchers' results in that frequency factors increase and time constants decrease with increasing heating rates. The model also shows the change in the behavior of individual tire components when the heating rate is increased above 30 K min(-1). This result indicates that heating rates, rather than temperature, can significantly affect pyrolysis reactions. The algorithm is simple in structure yet accurate in describing tire pyrolysis over a wide range of heating rates (10-50 K min(-1)). It improves our understanding of the tire pyrolysis process by showing the relationship between the heating rate and the many components in a tire that depolymerize as parallel reactions.
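As background for the kind of kinetic fitting the algorithm performs, the sketch below recovers first-order Arrhenius parameters from a single-heating-rate conversion curve via the standard linearization of the rate equation. The data are synthetic, and the paper's heat- and mass-transfer corrections and iterative multi-rate fitting are not reproduced.

```python
# For first-order pyrolysis, d(alpha)/dT = (A/beta)*exp(-E/(R*T))*(1 - alpha),
# so ln[beta * (d(alpha)/dT) / (1 - alpha)] is linear in 1/T, giving E and A by
# regression. Synthetic data stand in for TGA measurements.
import numpy as np

R_GAS = 8.314                                         # J/(mol K)
A_TRUE, E_TRUE, BETA = 1.0e10, 1.5e5, 30.0 / 60.0     # 1/s, J/mol, K/s (30 K/min)

# Generate synthetic conversion data by integrating the rate equation.
T = np.linspace(500.0, 800.0, 600)                    # K
alpha = np.zeros_like(T)
for k in range(1, T.size):
    dT = T[k] - T[k - 1]
    rate = (A_TRUE / BETA) * np.exp(-E_TRUE / (R_GAS * T[k - 1])) * (1 - alpha[k - 1])
    alpha[k] = min(alpha[k - 1] + rate * dT, 0.999999)

# Linearized fit: regress ln(beta*dalpha/dT/(1-alpha)) on 1/T.
dalpha_dT = np.gradient(alpha, T)
mask = (alpha > 0.05) & (alpha < 0.95)                # keep the well-resolved region
y = np.log(BETA * dalpha_dT[mask] / (1 - alpha[mask]))
x = 1.0 / T[mask]
slope, intercept = np.polyfit(x, y, 1)
print(f"fitted E ~ {-slope * R_GAS / 1e3:.0f} kJ/mol (true 150), "
      f"A ~ {np.exp(intercept):.2e} 1/s (true 1.0e10)")
```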
Computing danger zones for provably safe closely spaced parallel approaches: Theory and experiment
NASA Astrophysics Data System (ADS)
Teo, Rodney
In poor visibility, paired approaches to airports with closely spaced parallel runways are not permitted, halving the arrival rate. With Global Positioning System technology, datalinks, and cockpit displays, this limitation could be overcome. One important problem is ensuring safety during a blundered approach by one aircraft, which is the subject of ongoing research. A danger zone around the blundering aircraft is required: if the correct danger zone could be calculated, it would be possible to achieve 100% of clear-day capacity on poor-visibility days, even for runways spaced 750 feet apart. The danger zones vary significantly during an approach, so computing them in real time is essential; approximations (e.g., outer bounds) are not good enough. This thesis presents a way to calculate these danger zones in real time for a very broad class of blunder trajectories. The approach differs from others in that it guarantees safety for any possible blunder trajectory as long as the speeds and turn rates of the blunder are within certain bounds. In addition, the approach considers all emergency evasive maneuvers whose speeds and turn rates are within certain bounds about a nominal emergency evasive maneuver, and for all combinations of these blunder and evasive trajectories it guarantees that the evasive maneuver is safe. Over more than 1 million simulation runs, the algorithm shows a 100% rate of Successful Alerts and a 0% rate of Collisions Given an Alert. As an experimental testbed, two 10-ft-wingspan fully autonomous unmanned aerial vehicles and a ground station are developed together with J. S. Jang; the development includes the design and flight testing of automatic controllers. The testbed is used to demonstrate the algorithm through an autonomous closely spaced parallel approach, with one aircraft programmed to blunder. The other aircraft responds according to the result of the algorithm running on board and evades autonomously when required. This experimental demonstration is conducted successfully, showing in particular that the algorithm can run in real time. Finally, with the necessary sensors, datalink, and procedures in place, the algorithm developed in this thesis will enable 100% of clear-day capacity on poor-visibility days, even for runways spaced 750 feet apart.
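For intuition only, the sketch below does less than the thesis achieves: it samples blunder trajectories with bounded speed and turn rate and checks the miss distance against one nominal evasive maneuver. Sampling cannot provide the thesis's guarantee over all admissible trajectories, and the unicycle model, bounds, and separation threshold are invented.

```python
import numpy as np

def simulate(x, y, heading, speed, turn_rate, dt=0.5, steps=60):
    """Unicycle-model trajectory with constant speed and turn rate."""
    pts = []
    for _ in range(steps):
        heading += turn_rate * dt
        x += speed * np.cos(heading) * dt
        y += speed * np.sin(heading) * dt
        pts.append((x, y))
    return np.array(pts)

def evasion_looks_safe(sep_min=150.0, n_samples=2000):
    """Sample blunders within speed/turn-rate bounds and check the miss distance
    against a nominal evasive turn (illustrative bounds, no formal guarantee)."""
    rng = np.random.default_rng(0)
    evade = simulate(0.0, 230.0, 0.0, 70.0, np.radians(3.0))   # own aircraft turns away
    for _ in range(n_samples):
        speed = rng.uniform(60.0, 80.0)                        # m/s
        turn = np.radians(rng.uniform(-3.0, 3.0))              # rad/s
        blunder = simulate(0.0, 0.0, np.radians(30.0), speed, turn)
        if np.min(np.linalg.norm(evade - blunder, axis=1)) < sep_min:
            return False
    return True

print("evasive maneuver safe for sampled blunders:", evasion_looks_safe())
```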
Implementation of smart phone video plethysmography and dependence on lighting parameters.
Fletcher, Richard Ribón; Chamberlain, Daniel; Paggi, Nicholas; Deng, Xinyue
2015-08-01
The remote measurement of heart rate (HR) and heart rate variability (HRV) via a digital camera (video plethysmography) has emerged as an area of great interest for biomedical and health applications. While a few implementations of video plethysmography have been demonstrated on smart phones under controlled lighting conditions, it has been challenging to create a general, scalable solution due to the large variability in smart phone hardware performance, software architecture, and the variable response to lighting parameters. In this context, we present a self-contained smart phone implementation of video plethysmography for Android OS, which employs both stochastic and deterministic algorithms, and we use this to study the effect of lighting parameters (illuminance, color spectrum) on the accuracy of the remote HR measurement. Using two different phone models, we present the median HR error for five different video plethysmography algorithms under three different types of lighting (natural sunlight, compact fluorescent, and halogen incandescent) and variations in brightness. For most algorithms, we found the optimum light brightness to be in the range 1000-4000 lux and the optimum lighting types to be compact fluorescent and natural light. Moderate errors were found for most algorithms with some devices under conditions of low brightness (<500 lux) and high brightness (>4000 lux). Our analysis also identified camera frame rate jitter as a major source of variability and error across different phone models, but this can be largely corrected through non-linear resampling. Based on testing with six human subjects, our real-time Android implementation successfully predicted the measured HR with a median error of -0.31 bpm and an inter-quartile range of 2.1 bpm.
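One plausible deterministic pipeline of the kind compared in such studies is sketched below: average the green channel over a face region, resample onto a uniform time base to remove frame-rate jitter, and take the dominant spectral peak in a physiological band. The sampling rate, band limits, and windowing are assumptions, not the paper's Android implementation.

```python
import numpy as np

def estimate_hr(green_means, timestamps, fs=30.0, band=(0.7, 3.0)):
    """Estimate heart rate (bpm) from per-frame green-channel means.

    green_means : mean green value of the face ROI in each frame
    timestamps  : frame capture times in seconds (possibly jittery)
    fs          : uniform rate to resample to; band is the plausible HR range in Hz
    """
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / fs)
    sig = np.interp(t_uniform, timestamps, green_means)   # correct frame-rate jitter
    sig = sig - np.mean(sig)
    spectrum = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(spectrum[mask])]
```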
Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)
Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther
2013-01-01
The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private mutations (beyond the haplogroup level) can be additionally informative allowing for enhanced haplogroup assignment. This is especially relevant in the case of (partial) control region sequences, which are mainly used in forensics. The present study makes three major contributions toward a more reliable, semi-automated estimation of mitochondrial haplogroups. First, a quality-controlled database consisting of 14,990 full mtGenomes downloaded from GenBank was compiled. Together with Phylotree, these mtGenomes serve as a reference database for haplogroup estimates. Second, the concept of fluctuation rates, i.e. a maximum likelihood estimation of the stability of mutations based on 19,171 full control region haplotypes for which raw lane data are available, is presented. Finally, an algorithm for estimating the haplogroup of an mtDNA sequence based on the combined database of full mtGenomes and Phylotree, which also incorporates the empirically determined fluctuation rates, is brought forward. On the basis of examples from the literature and EMPOP, the algorithm is not only validated, but both the strength of this approach and its utility for quality control of mitochondrial haplotypes are also demonstrated. PMID:23948335
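To make the weighting idea concrete, the toy sketch below scores candidate haplogroups against an observed haplotype, down-weighting unstable positions via a fluctuation-rate table. The motifs and rates are invented placeholders, not Phylotree or EMMA's maximum likelihood machinery.

```python
# Toy haplogroup scoring: stable sites (low fluctuation rate) weigh more than
# unstable ones.  The motif table and rates below are invented placeholders.
HAPLOGROUP_MOTIFS = {
    "HG_A": {"263G", "8860G", "15326G", "3010A"},
    "HG_B": {"263G", "8860G", "15326G", "16270T", "9477A"},
}
FLUCTUATION = {"263G": 0.9, "8860G": 0.2, "15326G": 0.2,
               "3010A": 0.3, "16270T": 0.4, "9477A": 0.3}

def score(observed, motif, default_rate=0.5):
    """Higher score = better fit; matches add weight, mismatches subtract it."""
    s = 0.0
    for site in motif | observed:
        w = 1.0 - FLUCTUATION.get(site, default_rate)
        s += w if (site in motif) == (site in observed) else -w
    return s

observed = {"263G", "8860G", "15326G", "3010A", "16093C"}
best = max(HAPLOGROUP_MOTIFS, key=lambda hg: score(observed, HAPLOGROUP_MOTIFS[hg]))
print("estimated haplogroup:", best)
```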
Hoenigl, Martin; Graff-Zivin, Joshua; Little, Susan J.
2016-01-01
Background. In nonhealthcare settings, widespread screening for acute human immunodeficiency virus (HIV) infection (AHI) is limited by cost, and decision algorithms are needed to better prioritize use of resources. Comparative cost analyses for available strategies are lacking. Methods. To determine the cost-effectiveness of community-based testing strategies, we evaluated the annual costs of 3 algorithms that detect AHI based on HIV nucleic acid amplification testing (EarlyTest algorithm) or on HIV p24 antigen (Ag) detection via Architect (Architect algorithm) or Determine (Determine algorithm), as well as 1 algorithm that relies on HIV antibody testing alone (Antibody algorithm). The cost model used data on men who have sex with men (MSM) undergoing community-based AHI screening in San Diego, California. Incremental cost-effectiveness ratios (ICERs) per diagnosis of AHI were calculated for programs with HIV prevalence rates between 0.1% and 2.9%. Results. Among MSM in San Diego, EarlyTest was cost-saving (ie, ICERs per AHI diagnosis less than $13,000) when compared with the 3 other algorithms. Cost analyses relative to regional HIV prevalence showed that EarlyTest was cost-effective (ie, ICERs less than $69,547) for similar populations of MSM with an HIV prevalence rate >0.4%; Architect was the second best alternative for HIV prevalence rates >0.6%. Conclusions. Identification of AHI by the dual EarlyTest screening algorithm is likely to be cost-effective not only among at-risk MSM in San Diego but also among similar populations of MSM with HIV prevalence rates >0.4%. PMID:26508512
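The ICER arithmetic behind such comparisons is simple; the figures below are invented for illustration and are not the San Diego program's actual costs or yields.

```python
def icer(cost_new, diagnoses_new, cost_ref, diagnoses_ref):
    """Incremental cost-effectiveness ratio: extra dollars per extra AHI diagnosis."""
    return (cost_new - cost_ref) / (diagnoses_new - diagnoses_ref)

# Illustrative annual figures only: a NAT-based algorithm vs an antibody-only one.
print(f"ICER = ${icer(250_000, 12, 180_000, 8):,.0f} per additional AHI diagnosis")
```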
Shinnar, Shlomo; Gloss, David; Alldredge, Brian; Arya, Ravindra; Bainbridge, Jacquelyn; Bare, Mary; Bleck, Thomas; Dodson, W. Edwin; Garrity, Lisa; Jagoda, Andy; Lowenstein, Daniel; Pellock, John; Riviello, James; Sloan, Edward; Treiman, David M.
2016-01-01
CONTEXT: The optimal pharmacologic treatment for early convulsive status epilepticus is unclear. OBJECTIVE: To analyze efficacy, tolerability and safety data for anticonvulsant treatment of children and adults with convulsive status epilepticus and use this analysis to develop an evidence-based treatment algorithm. DATA SOURCES: Structured literature review using MEDLINE, Embase, Current Contents, and Cochrane library supplemented with article reference lists. STUDY SELECTION: Randomized controlled trials of anticonvulsant treatment for seizures lasting longer than 5 minutes. DATA EXTRACTION: Individual studies were rated using predefined criteria and these results were used to form recommendations, conclusions, and an evidence-based treatment algorithm. RESULTS: A total of 38 randomized controlled trials were identified, rated and contributed to the assessment. Only four trials were considered to have class I evidence of efficacy. Two studies were rated as class II and the remaining 32 were judged to have class III evidence. In adults with convulsive status epilepticus, intramuscular midazolam, intravenous lorazepam, intravenous diazepam and intravenous phenobarbital are established as efficacious as initial therapy (Level A). Intramuscular midazolam has superior effectiveness compared to intravenous lorazepam in adults with convulsive status epilepticus without established intravenous access (Level A). In children, intravenous lorazepam and intravenous diazepam are established as efficacious at stopping seizures lasting at least 5 minutes (Level A) while rectal diazepam, intramuscular midazolam, intranasal midazolam, and buccal midazolam are probably effective (Level B). No significant difference in effectiveness has been demonstrated between intravenous lorazepam and intravenous diazepam in adults or children with convulsive status epilepticus (Level A). Respiratory and cardiac symptoms are the most commonly encountered treatment-emergent adverse events associated with intravenous anticonvulsant drug administration in adults with convulsive status epilepticus (Level A). The rate of respiratory depression in patients with convulsive status epilepticus treated with benzodiazepines is lower than in patients with convulsive status epilepticus treated with placebo indicating that respiratory problems are an important consequence of untreated convulsive status epilepticus (Level A). When both are available, fosphenytoin is preferred over phenytoin based on tolerability but phenytoin is an acceptable alternative (Level A). In adults, compared to the first therapy, the second therapy is less effective while the third therapy is substantially less effective (Level A). In children, the second therapy appears less effective and there are no data about third therapy efficacy (Level C). The evidence was synthesized into a treatment algorithm. CONCLUSIONS: Despite the paucity of well-designed randomized controlled trials, practical conclusions and an integrated treatment algorithm for the treatment of convulsive status epilepticus across the age spectrum (infants through adults) can be constructed. Multicenter, multinational efforts are needed to design, conduct and analyze additional randomized controlled trials that can answer the many outstanding clinically relevant questions identified in this guideline. PMID:26900382
NASA Astrophysics Data System (ADS)
Nemirsky, Kristofer Kevin
In this thesis, the history and evolution of rotor aircraft and of simulated annealing-based PID applications are reviewed, and quadcopter dynamics are presented. The dynamics of a quadcopter are then modeled, analyzed, and linearized. A cascaded-loop architecture with PID controllers is used to stabilize the plant dynamics and is improved through the application of simulated annealing (SA). A Simulink model was developed to test the controllers and verify the functionality of the proposed control system design. In addition, the data provided by the Simulink model were compared with flight data to demonstrate the validity of the derived dynamics as a mathematical model of the true dynamics of the quadcopter system. The SA-based global optimization procedure was then applied to obtain optimized PID parameters. The gains tuned through the SA algorithm produced a better-performing PID controller than the original manually tuned one. Next, the uncertain dynamics of the quadcopter setup were investigated: after adding uncertainty to the gyroscopic effects associated with the pitch- and roll-rate dynamics, the controllers were shown to be robust against the added uncertainty. A discussion follows that summarizes the SA-based PID controller design and its performance outcomes. Lastly, future work on applying SA to multi-input multi-output (MIMO) systems is briefly discussed.
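A minimal simulated annealing loop for PID tuning is sketched below on a placeholder second-order plant with an integral-of-absolute-error cost; the plant, cooling schedule, and step sizes are assumptions, not the thesis's quadcopter model or Simulink setup.

```python
import numpy as np

def step_cost(gains, dt=0.01, T=5.0):
    """Integral of absolute error for a PID step response on a toy 2nd-order plant."""
    Kp, Ki, Kd = gains
    x = v = integ = 0.0
    prev_err, cost = 1.0, 0.0
    for _ in range(int(T / dt)):
        err = 1.0 - x                           # unit step reference
        integ += err * dt
        u = Kp * err + Ki * integ + Kd * (err - prev_err) / dt
        prev_err = err
        a = u - 2.0 * 0.1 * v - x               # lightly damped placeholder plant
        v += a * dt
        x += v * dt
        cost += abs(err) * dt
    return cost

rng = np.random.default_rng(1)
current = np.array([1.0, 0.1, 0.1])             # initial (Kp, Ki, Kd)
cur_cost = step_cost(current)
best, best_cost = current.copy(), cur_cost
temp = 1.0
for _ in range(300):                            # simulated annealing loop
    cand = np.clip(current + rng.normal(0.0, 0.2, 3), 0.0, None)
    c = step_cost(cand)
    # accept better candidates always, worse ones with Boltzmann probability
    if c < cur_cost or rng.random() < np.exp((cur_cost - c) / temp):
        current, cur_cost = cand, c
        if c < best_cost:
            best, best_cost = cand.copy(), c
    temp *= 0.99                                # geometric cooling schedule

print("tuned gains Kp, Ki, Kd:", np.round(best, 3), " cost:", round(best_cost, 4))
```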
NASA Astrophysics Data System (ADS)
Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan
2018-03-01
False alarm rate and detection rate remain two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than declaring false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, this paper presents a false-alarm-aware methodology that reduces the false alarm rate while keeping the detection rate undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of their false alarms are determined. Two target detection algorithms with independent false alarm sources are chosen such that the disadvantages of one algorithm are compensated by the advantages of the other. In this work, the multi-scale average absolute gray difference (AAGD) and the Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed at just four different scales, the desired algorithm is well suited to real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images demonstrate the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is extendable to any pair of detection algorithms with different false alarm sources.
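The fusion idea can be sketched as follows: compute two detection maps whose false alarms arise from different sources and keep only the pixels where both agree. The AAGD and LoPSF stand-ins below are simplified box-filter and Laplacian-of-Gaussian approximations, and the thresholding is illustrative rather than the paper's mechanism.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_laplace

def aagd_like(img, scales=(1, 2, 3, 4)):
    """Multi-scale average absolute gray difference (simplified stand-in)."""
    resp = np.zeros_like(img, dtype=float)
    for s in scales:
        inner = uniform_filter(img, size=2 * s + 1)
        outer = uniform_filter(img, size=4 * s + 1)
        resp = np.maximum(resp, np.abs(inner - outer))
    return resp

def lopsf_like(img, sigma=1.5):
    """Laplacian-of-PSF-style response (negative LoG highlights point targets)."""
    return -gaussian_laplace(img.astype(float), sigma)

def detect(img, k=4.0):
    """Keep only pixels exceeding a mean + k*std threshold in BOTH response maps."""
    hits = []
    for resp in (aagd_like(img.astype(float)), lopsf_like(img)):
        hits.append(resp > resp.mean() + k * resp.std())
    return hits[0] & hits[1]   # independent false-alarm sources rarely agree
```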
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
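The trade-off the paper formalizes can be seen in a scalar toy example: a small learning rate gives a low steady-state error but slow recovery after an abrupt model change, while a large one converges quickly but stays noisy. The LMS-style estimator and numbers below are illustrative, not the paper's adaptive Bayesian filters or analytical calibration.

```python
import numpy as np

def adaptive_fit(learning_rate, steps=5000, drift_at=2500, seed=0):
    """Scalar LMS-style tracking of an encoding weight that jumps mid-run."""
    rng = np.random.default_rng(seed)
    w_true, w_hat = 1.0, 0.0
    errs = []
    for k in range(steps):
        if k == drift_at:
            w_true = 1.5                     # abrupt model change
        x = rng.normal()                     # "brain state" regressor
        y = w_true * x + 0.1 * rng.normal()  # noisy neural observation
        w_hat += learning_rate * (y - w_hat * x) * x
        errs.append((w_hat - w_true) ** 2)
    return np.array(errs)

for lr in (0.002, 0.02, 0.2):
    e = adaptive_fit(lr)
    print(f"lr={lr}: steady-state MSE={e[2000:2500].mean():.4f}, "
          f"post-change MSE={e[2600:3100].mean():.4f}")
```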
Effect of Common Faults on the Performance of Different Types of Vapor Compression Systems
Du, Zhimin; Domanski, Piotr A.; Payne, W. Vance
2016-01-01
The effect of faults on the cooling capacity, coefficient of performance, and sensible heat ratio was analyzed and compared for five split and rooftop systems, which use different types of expansion devices, compressors, and refrigerants. The study applied multivariable polynomial and normalized performance models, which were developed for the studied systems for both fault-free and faulty conditions based on measurements obtained in a laboratory under controlled conditions. The analysis indicated differences in responses and trends between the studied systems, which underscores the challenge of devising a universal FDD algorithm for all vapor compression systems and the difficulty of developing a methodology for rating the performance of different FDD algorithms. PMID:26929732
On the efficient and reliable numerical solution of rate-and-state friction problems
NASA Astrophysics Data System (ADS)
Pipping, Elias; Kornhuber, Ralf; Rosenau, Matthias; Oncken, Onno
2016-03-01
We present a mathematically consistent numerical algorithm for the simulation of earthquake rupture with rate-and-state friction. Its main features are adaptive time stepping, a novel algebraic solution algorithm involving nonlinear multigrid, and a fixed-point iteration for the rate-and-state decoupling. The algorithm is applied to a laboratory-scale subduction zone, which allows us to compare our simulations with experimental results. Using physical parameters from the experiment, we find a good fit for the recurrence time of slip events as well as their rupture width and peak slip. Computations in 3-D confirm the efficiency and robustness of our algorithm.
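For orientation, the standard rate-and-state spring-slider with the aging law and quasi-dynamic (radiation-damped) loading is sketched below; the parameters are textbook-style placeholders rather than the laboratory subduction-zone values, and the paper's contributions (adaptive time stepping, nonlinear multigrid, fixed-point decoupling) are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Quasi-dynamic spring-slider with Dieterich rate-and-state friction (aging law).
# Placeholder parameters; velocity-weakening (a < b) and k below critical stiffness
# give repeated stick-slip events.
a, b, Dc = 0.010, 0.015, 1e-5        # rate/state parameters, characteristic slip [m]
sigma, k = 5.0e6, 1.0e9              # normal stress [Pa], spring stiffness [Pa/m]
V0, Vlp = 1.0e-6, 1.0e-6             # reference and load-point velocity [m/s]
eta = 5.0e6                          # radiation damping ~ G/(2*c_s) [Pa s/m]

def rhs(t, y):
    V, theta = y
    dtheta = 1.0 - V * theta / Dc                                  # aging law
    dV = (k * (Vlp - V) - sigma * b * dtheta / theta) / (sigma * a / V + eta)
    return [dV, dtheta]

sol = solve_ivp(rhs, (0.0, 5000.0), [V0, Dc / V0], method="LSODA",
                rtol=1e-8, atol=[1e-14, 1e-10], dense_output=True)
t = np.linspace(0.0, 5000.0, 5000)
V = sol.sol(t)[0]
print(f"peak slip rate: {V.max():.3e} m/s "
      f"(stick-slip expected since k < sigma*(b-a)/Dc = {sigma*(b-a)/Dc:.1e} Pa/m)")
```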
NASA Astrophysics Data System (ADS)
Choy, Vanessa; Tang, Kee; Wachsmuth, Jeff; Chopra, Rajiv; Bronskill, Michael
2006-05-01
Transurethral thermal therapy offers a minimally invasive alternative for the treatment of prostate diseases, including benign prostatic hyperplasia (BPH) and prostate cancer. Accurate heating of a targeted region of the gland can be achieved with a rotating directional heating source incorporating planar ultrasound transducers, together with active temperature feedback along the beam direction provided during heating by magnetic resonance (MR) thermometry. The performance of this control method under the practical feedback conditions achievable with a clinical 1.5 T MR scanner, including angular alignment, spatial resolution, the update rate for temperature feedback (imaging time), and the presence of noise, was investigated in simulations. As expected, the control algorithm was most sensitive to noise, with noticeable degradation in performance above ±2°C of temperature uncertainty. With respect to temporal resolution, acceptable performance was achieved at update rates of 5 s or faster. The control algorithm was relatively insensitive to reduced spatial resolution owing to the broad nature of the heating pattern produced by the applicator; this provides an opportunity to improve the signal-to-noise ratio (SNR). The overall simulation results confirm that existing clinical 1.5 T MR imagers are capable of providing adequate temperature feedback for transurethral thermal therapy without special pulse sequences or enhanced imaging hardware.
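A toy version of such temperature feedback is sketched below: noisy temperature samples arrive every few seconds and an accumulated (integral-style) power update drives a one-compartment tissue model toward a target. The thermal model, gain, and limits are invented; only the ±2°C noise level and 5 s update interval echo the conditions studied.

```python
import numpy as np

# Toy MR-thermometry feedback loop on a one-compartment tissue model.
# All values are placeholders except the +/-2 C noise and 5 s update interval.
rng = np.random.default_rng(0)
dt, horizon = 0.5, 600.0            # simulation step and duration [s]
update_every, noise_sd = 5.0, 2.0   # MR update interval [s], temperature noise [C]
target, T = 55.0, 37.0              # target boundary temperature, tissue temp [C]
power, Kp = 0.0, 0.4                # acoustic power (arb. units), update gain

update_steps = int(update_every / dt)
for step in range(int(horizon / dt)):
    if step % update_steps == 0:                         # new MR temperature image
        measured = T + rng.normal(0.0, noise_sd)
        power = float(np.clip(power + Kp * (target - measured), 0.0, 10.0))
    T += dt * (0.05 * power - 0.01 * (T - 37.0))         # heating minus perfusion loss

print(f"boundary temperature after {horizon:.0f} s: {T:.1f} C (power {power:.2f})")
```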
Learning control system design based on 2-D theory - An application to parallel link manipulator
NASA Technical Reports Server (NTRS)
Geng, Z.; Carroll, R. L.; Lee, J. D.; Haynes, L. H.
1990-01-01
An approach to iterative learning control system design based on two-dimensional system theory is presented. A two-dimensional model for the iterative learning control system which reveals the connections between learning control systems and two-dimensional system theory is established. A learning control algorithm is proposed, and the convergence of learning using this algorithm is guaranteed by two-dimensional stability. The learning algorithm is applied successfully to the trajectory tracking control problem for a parallel link robot manipulator. The excellent performance of this learning algorithm is demonstrated by the computer simulation results.
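A P-type iterative learning update of the form u_{k+1}(t) = u_k(t) + L e_k(t+1) on a scalar first-order plant illustrates the trial-to-trial convergence idea; the plant and learning gain are placeholders, not the paper's two-dimensional formulation or the parallel-link manipulator model.

```python
import numpy as np

# P-type iterative learning control on a toy first-order discrete plant:
#   x[t+1] = 0.9*x[t] + 0.5*u[t]
# Convergence holds here since |1 - L*0.5| < 1; gains and plant are illustrative.
N, trials, L = 50, 20, 1.2
ref = np.sin(np.linspace(0.0, np.pi, N + 1))   # desired trajectory y_d(t)
u = np.zeros(N)

for k in range(trials):
    x = np.zeros(N + 1)
    for t in range(N):
        x[t + 1] = 0.9 * x[t] + 0.5 * u[t]
    e = ref - x                                # tracking error on this trial
    u = u + L * e[1:]                          # learn from the next-step error
    if k % 5 == 0 or k == trials - 1:
        print(f"trial {k:2d}  max |error| = {np.max(np.abs(e[1:])):.4f}")
```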
Constrained independent component analysis approach to nonobtrusive pulse rate measurements
NASA Astrophysics Data System (ADS)
Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.
2012-07-01
Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.
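For comparison with the constrained approach, a plain-ICA baseline is sketched below; it sidesteps the sorting problem by selecting, after the fact, the component with the strongest spectral peak in a plausible pulse band. The sampling rate and band limits are assumptions, and this is not the paper's constrained ICA.

```python
import numpy as np
from sklearn.decomposition import FastICA

def pulse_rate_from_rgb(rgb_traces, fs=30.0, band=(0.75, 3.0)):
    """Estimate pulse rate (bpm) from mean R, G, B face traces of a video.

    rgb_traces : array of shape (n_frames, 3).  Plain FastICA is used, then the
    component is picked by its spectral peak in the pulse band -- a selection
    heuristic, not the constrained ICA formulation of the paper.
    """
    X = rgb_traces - rgb_traces.mean(axis=0)
    sources = FastICA(n_components=3, random_state=0).fit_transform(X)
    freqs = np.fft.rfftfreq(X.shape[0], d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    best_bpm, best_power = 0.0, -np.inf
    for s in sources.T:
        spec = np.abs(np.fft.rfft(s * np.hanning(s.size)))
        i = np.argmax(spec[mask])
        if spec[mask][i] > best_power:
            best_power, best_bpm = spec[mask][i], 60.0 * freqs[mask][i]
    return best_bpm
```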
A pheromone-rate-based analysis on the convergence time of ACO algorithm.
Huang, Han; Wu, Chun-Guo; Hao, Zhi-Feng
2009-08-01
Ant colony optimization (ACO) has been widely applied to solve combinatorial optimization problems in recent years. There are few studies, however, on its convergence time, which reflects how many iterations ACO algorithms need to converge to the optimal solution. Based on an absorbing Markov chain model, we analyze the ACO convergence time in this paper. First, we present a general result for the estimation of convergence time that reveals the relationship between convergence time and pheromone rate. This general result is then extended to a two-step analysis of the convergence time, which includes: 1) the iteration time that the pheromone rate spends on reaching the objective value and 2) the convergence time that is calculated with the objective pheromone rate in expectation. Furthermore, four brief ACO algorithms are investigated by using the proposed theoretical results as case studies. Finally, the conclusion of the case studies, that the pheromone rate and its deviation determine the expected convergence time, is numerically verified with the experimental results of four one-ant and four ten-ant ACO algorithms.
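A one-ant pheromone-update loop of the kind analyzed can be sketched on a OneMax-style problem: the pheromone rate at each bit is the probability of sampling a 1, and it is evaporated and reinforced toward the best solution found so far. The problem, pheromone bounds, and evaporation rate are illustrative, not the paper's Markov-chain analysis.

```python
import numpy as np

# One-ant ACO on a OneMax-style problem: tau[i] is the pheromone rate, i.e. the
# probability of setting bit i to 1.  Illustrative placeholder parameters.
rng = np.random.default_rng(0)
n_bits, rho = 20, 0.1                     # problem size, evaporation rate
tau = np.full(n_bits, 0.5)                # initial pheromone rates
best, best_fit = None, -1

for iteration in range(1, 10_001):
    x = (rng.random(n_bits) < tau).astype(int)     # construct one solution
    fit = x.sum()
    if fit > best_fit:
        best, best_fit = x, fit
    tau = (1 - rho) * tau + rho * best             # reinforce best-so-far solution
    tau = np.clip(tau, 0.05, 0.95)                 # pheromone bounds (MMAS-style)
    if best_fit == n_bits:
        print("optimum reached at iteration", iteration)
        break
```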