In-Flight Pitot-Static Calibration
NASA Technical Reports Server (NTRS)
Foster, John V. (Inventor); Cunningham, Kevin (Inventor)
2016-01-01
A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.
A modified adjoint-based grid adaptation and error correction method for unstructured grid
NASA Astrophysics Data System (ADS)
Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi
2018-05-01
Grid adaptation is an important strategy for improving the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory, in which the estimated global error of an output function can be related directly to the local residual error. Using this relationship, the local residual error contribution serves as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. The grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the grid regions to which the output functions are sensitive are detected and refined after adaptation, and that the accuracy of the output functions is clearly improved after error correction. The proposed method compares very favorably, in terms of output accuracy and computational efficiency, with traditional feature-based grid adaptation.
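The dual-weighted-residual idea behind this kind of adaptation can be illustrated in a few lines. The sketch below is not taken from the paper: the per-cell residuals and adjoint values are hypothetical placeholders that a real solver would compute from the flow and adjoint solutions.

```python
import numpy as np

def adjoint_error_indicators(residuals, adjoint, tol):
    """Toy sketch of adjoint-weighted residual (dual-weighted residual) indicators.

    residuals : local residual of the primal discretization, one value per cell
    adjoint   : discrete adjoint solution for the output functional, per cell
    tol       : target bound on the estimated global output error
    """
    # Local contribution of each cell to the output error
    eta = np.abs(adjoint * residuals)
    # Estimated global output error is the sum of local contributions
    global_error = eta.sum()
    # Flag cells whose contribution exceeds an equidistributed share of the tolerance
    refine = eta > tol / eta.size
    return eta, global_error, refine

# Hypothetical per-cell data standing in for a coarse flow solution
rng = np.random.default_rng(0)
residuals = rng.normal(scale=1e-3, size=1000)
adjoint = rng.normal(scale=5.0, size=1000)
eta, err, refine = adjoint_error_indicators(residuals, adjoint, tol=1e-3)
print(f"estimated output error {err:.3e}, cells flagged for refinement {refine.sum()}")
```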
NASA Astrophysics Data System (ADS)
Lu, Yanrong; Liao, Fucheng; Deng, Jiamei; Liu, Huiyang
2017-09-01
This paper investigates the cooperative global optimal preview tracking problem of linear multi-agent systems under the assumptions that the output of a leader is a previewable periodic signal and that the topology graph contains a directed spanning tree. First, a type of distributed internal model is introduced, and the cooperative preview tracking problem is converted into a global optimal regulation problem of an augmented system. Second, an optimal controller, which guarantees the asymptotic stability of the augmented system, is obtained by means of standard linear quadratic optimal preview control theory. Third, after proving the existence conditions of the controller, sufficient conditions are given for the original problem to be solvable, and a cooperative global optimal controller with error-integral and preview compensation is derived. Finally, the validity of the theoretical results is demonstrated by a numerical simulation.
A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Garg, Devendra P.
1998-01-01
This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be imposed during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error, and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function; minimizing the objective function yields optimal system performance. Line-of-sight pointing control of a spacecraft-mounted science instrument is used to demonstrate the results.
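A minimal illustration of the tuning idea: membership-function parameters form a design vector that a numerical optimizer adjusts to minimize a tracking objective. The plant, membership shapes, and single-input rule base below are toy assumptions, not the controller used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_control(e, w):
    """Single-input fuzzy controller with 'negative', 'zero', 'positive' error sets.
    w = [spread of the error sets, magnitude of the output singletons]."""
    s, u_max = w
    mu = np.array([tri(e, -2 * s, -s, 0.0),   # negative error
                   tri(e, -s, 0.0, s),        # near-zero error
                   tri(e, 0.0, s, 2 * s)])    # positive error
    u_sets = np.array([-u_max, 0.0, u_max])   # output singletons
    return (mu @ u_sets) / (mu.sum() + 1e-12) # centroid defuzzification

def cost(w, dt=0.02, T=5.0, ref=1.0):
    """Integrated squared tracking error for a toy first-order plant x' = -x + u."""
    x, J = 0.0, 0.0
    for _ in range(int(T / dt)):
        u = fuzzy_control(ref - x, w)
        x += dt * (-x + u)
        J += dt * (ref - x) ** 2
    return J

res = minimize(cost, x0=[1.0, 1.0], bounds=[(0.1, 5.0), (0.1, 5.0)], method="L-BFGS-B")
print("tuned membership spread and output gain:", res.x)
```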
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
Research of converter transformer fault diagnosis based on improved PSO-BP algorithm
NASA Astrophysics Data System (ADS)
Long, Qi; Guo, Shuyong; Li, Qing; Sun, Yong; Li, Yi; Fan, Youping
2017-09-01
BP (back propagation) neural networks and conventional Particle Swarm Optimization (PSO) tend to converge repeatedly toward the global best particle in the early stage, are easily trapped in local optima, and give low diagnosis accuracy when applied to converter transformer fault diagnosis. To overcome these disadvantages, we propose an improved PSO-BP neural network to raise the accuracy rate. The algorithm modifies the inertia weight equation using an attenuation strategy based on a concave function to avoid premature convergence of the PSO algorithm, and adopts a Time-Varying Acceleration Coefficient (TVAC) strategy to balance local and global search ability. Simulation results show that the proposed approach is better at optimizing the BP neural network in terms of network output error, global search performance, and diagnosis accuracy.
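A compact sketch of a PSO loop with the two modifications named in the abstract: a concave inertia-weight decay and time-varying acceleration coefficients. The decay law, coefficient ranges, and the Rastrigin function (a stand-in for the BP training error) are assumptions for illustration only.

```python
import numpy as np

def pso_tvac(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal PSO with concave inertia-weight decay and TVAC (assumed constants)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    w_max, w_min = 0.9, 0.4          # inertia weight range
    c1_i, c1_f = 2.5, 0.5            # cognitive coefficient: large -> small
    c2_i, c2_f = 0.5, 2.5            # social coefficient: small -> large
    for t in range(iters):
        s = t / (iters - 1)
        w = w_min + (w_max - w_min) * (1.0 - s**2)   # concave decay of inertia
        c1 = c1_i + (c1_f - c1_i) * s                # TVAC: explore early,
        c2 = c2_i + (c2_f - c2_i) * s                #       exploit late
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Example: minimize the Rastrigin function as a stand-in for the BP training error
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
best, best_f = pso_tvac(rastrigin, [(-5.12, 5.12)] * 5)
print(best_f)
```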
Global Radius of Curvature Estimation and Control System for Segmented Mirrors
NASA Technical Reports Server (NTRS)
Rakoczy, John M. (Inventor)
2006-01-01
An apparatus controls positions of plural mirror segments in a segmented mirror with an edge sensor system and a controller. Current mirror segment edge sensor measurements and edge sensor reference measurements are compared with calculated edge sensor bias measurements representing a global radius of curvature. Accumulated prior actuator commands output from an edge sensor control unit are combined with an estimator matrix to form the edge sensor bias measurements. An optimal control matrix unit then accumulates the plurality of edge sensor error signals calculated by the summation unit and outputs the corresponding plurality of actuator commands. The plural mirror actuators respond to the actuator commands by moving the respective positions of the mirror segments. A predetermined number of boundary conditions, corresponding to a plurality of hexagonal mirror locations, are removed to enable the matrix calculations.
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo
A microgrid (MG) is one of the measures for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). To construct a MG economically, optimizing the capacity of controllable DGs against the RE-based DGs is essential. Using a numerical simulation model developed from demonstrative studies of a MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as the RE-based DG, this study examines the influence of PVS output forecast accuracy on capacity optimization and on daily operation, evaluated in terms of cost. The main results are as follows. The required NaS battery capacity must be increased by 10-40% relative to the ideal situation with no forecast error in PVS power output. The influence of forecast error on the electricity received from the grid is not significant on an annual basis, because positive and negative forecast errors vary from day to day. The annual total cost of facilities and operation increases by 2-7% due to the forecast error applied in this study. The impacts of forecast error on facility optimization and on operation optimization are nearly equal, each at a few percent, implying that forecast accuracy should be improved both in the number of occasions with large forecast error and in the average error.
Non-linear dynamic compensation system
NASA Technical Reports Server (NTRS)
Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)
1992-01-01
A non-linear dynamic compensation subsystem is added in the feedback loop of a high precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide bandwidth, optimized for speed of response, to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to the control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits, and varied smoothly between the two as the error signal approaches the preselected limits.
NASA Technical Reports Server (NTRS)
Siu, Marie-Michele; Martos, Borja; Foster, John V.
2013-01-01
As part of a joint partnership between the NASA Aviation Safety Program (AvSP) and the University of Tennessee Space Institute (UTSI), research on advanced air data calibration methods has been in progress. This research was initiated to expand a novel pitot-static calibration method that was developed to allow rapid in-flight calibration for the NASA Airborne Subscale Transport Aircraft Research (AirSTAR) facility. The approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeed with defined confidence bounds. Subscale flight tests demonstrated small 2-σ error bounds with a significant reduction in test time compared to other methods. Recent UTSI full-scale flight tests have shown airspeed calibrations with accuracy equal to or better than the Federal Aviation Administration (FAA) accepted GPS 'four-leg' method, in a smaller test area and in less time. The current research was motivated by the desire to extend this method to in-flight calibration of angle of attack (AOA) and angle of sideslip (AOS) flow vanes. An instrumented Piper Saratoga research aircraft from UTSI was used to collect the flight test data and evaluate flight test maneuvers. Results showed that the output-error approach produces good results for flow vane calibration. In addition, maneuvers for pitot-static and flow vane calibration can be integrated to enable simultaneous and efficient testing of each system.
NASA Technical Reports Server (NTRS)
Taylor, Brian R.
2012-01-01
A novel, efficient air data calibration method is proposed for aircraft with limited envelopes. This method uses output-error optimization on three-dimensional inertial velocities to estimate calibration and wind parameters. Calibration parameters are based on assumed calibration models for static pressure, angle of attack, and flank angle. Estimated wind parameters are the north, east, and down components. The only assumptions needed for this method are that the inertial velocities and Euler angles are accurate, the calibration models are correct, and that the steady-state component of wind is constant throughout the maneuver. A two-minute maneuver was designed to excite the aircraft over the range of air data calibration parameters and de-correlate the angle-of-attack bias from the vertical component of wind. Simulation of the X-48B (The Boeing Company, Chicago, Illinois) aircraft was used to validate the method, ultimately using data derived from wind-tunnel testing to simulate the uncalibrated air data measurements. Results from the simulation were accurate and robust to turbulence levels comparable to those observed in flight. Future experiments are planned to evaluate the proposed air data calibration in a flight environment.
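The output-error idea can be sketched as follows: predict the GPS/inertial velocity from uncalibrated air data plus unknown calibration and wind parameters, then adjust those parameters to minimize the residual. The calibration model (a scale and bias on airspeed, a bias on angle of attack), the zero-sideslip simplification, and the synthetic maneuver data below are all illustrative assumptions, not the X-48B models used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def body_to_ned(phi, theta, psi):
    """Rotation matrix from body axes to NED axes for Euler angles (rad)."""
    cph, sph, cth = np.cos(phi), np.sin(phi), np.cos(theta)
    sth, cps, sps = np.sin(theta), np.cos(psi), np.sin(psi)
    return np.array([
        [cth * cps, sph * sth * cps - cph * sps, cph * sth * cps + sph * sps],
        [cth * sps, sph * sth * sps + cph * cps, cph * sth * sps - sph * cps],
        [-sth,      sph * cth,                   cph * cth]])

def residuals(p, V_m, alpha_m, euler, v_gps):
    """p = [airspeed scale, airspeed bias, AOA bias, wind N, wind E, wind D]."""
    k, b, da, wn, we, wd = p
    res = []
    for Vi, ai, (phi, th, psi), vg in zip(V_m, alpha_m, euler, v_gps):
        V, a = k * Vi + b, ai + da
        v_body = V * np.array([np.cos(a), 0.0, np.sin(a)])   # zero sideslip assumed
        v_pred = body_to_ned(phi, th, psi) @ v_body + np.array([wn, we, wd])
        res.append(vg - v_pred)
    return np.concatenate(res)

# Synthetic maneuver (hypothetical): heading sweep with varying airspeed and pitch
rng = np.random.default_rng(1)
n = 200
psi = np.linspace(0, 2 * np.pi, n)
euler = np.stack([np.zeros(n), 0.05 * np.sin(psi), psi], axis=1)
V_true = 60 + 10 * np.sin(2 * psi)
alpha_true = 0.06 + 0.02 * np.cos(psi)
wind = np.array([5.0, -3.0, 0.0])
v_gps = np.array([body_to_ned(*e) @ (V * np.array([np.cos(a), 0, np.sin(a)])) + wind
                  for e, V, a in zip(euler, V_true, alpha_true)])
V_m = (V_true - 2.0) / 0.95 + rng.normal(0, 0.3, n)      # uncalibrated airspeed
alpha_m = alpha_true - 0.01 + rng.normal(0, 0.002, n)    # biased AOA vane

sol = least_squares(residuals, x0=[1, 0, 0, 0, 0, 0], args=(V_m, alpha_m, euler, v_gps))
print("scale, bias, AOA bias, wind NED:", np.round(sol.x, 3))
```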
NASA Astrophysics Data System (ADS)
Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun
2018-02-01
Quantum computing offers computational capability superior to classical computing because of its superposition feature. Distinguishing several quantum states produced as quantum algorithm outputs is often a vital computational task. In most cases the quantum states tend to be non-orthogonal due to superposition, and quantum mechanics shows that non-orthogonal states cannot be discriminated perfectly by any measurement, which forces repeated measurements. Hence, it is important to determine the optimal measuring method, one that requires fewer repetitions and has a lower error rate. However, extending current measurement approaches, which are mainly aimed at quantum cryptography, to multi-qubit situations for quantum computing presents challenges, such as the need for global operations, which carry considerable cost in the experimental realm. Therefore, in this study, we propose an optimal subsystem method to avoid these difficulties. We provide an analysis comparing the reduced subsystem method with the global minimum-error method for two-qubit problems, and the conclusions are verified experimentally. The results show that the subsystem method can effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process is significantly reduced while, in most circumstances, the error rate remains acceptable. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.
Optimized System Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Longman, Richard W.
1999-01-01
In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer which has the property that as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations here. Examples show that the methods developed here eliminate the bias that is often observed using any system identification methods of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
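The distinction between equation error and output error can be shown on a toy second-order model: the residual below is built from the model's own simulated outputs rather than the measured past outputs, which is what removes the damping bias mentioned in the abstract. This is a hedged sketch, not the OKID-initialized algorithm of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(theta, u):
    """Simulate y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] using the
    model's own past outputs (this is what makes the fit an output-error fit)."""
    a1, a2, b1, b2 = theta
    y = np.zeros_like(u)
    for k in range(2, len(u)):
        y[k] = a1 * y[k - 1] + a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2]
    return y

rng = np.random.default_rng(0)
u = rng.standard_normal(500)
theta_true = [1.5, -0.7, 0.5, 0.25]                 # lightly damped second-order system
y_meas = simulate(theta_true, u) + 0.05 * rng.standard_normal(u.size)  # noisy measurements

# Output-error fit: the residual uses simulated outputs, not measured past outputs,
# avoiding the damping bias of plain least-squares equation-error fits.
fit = least_squares(lambda th: simulate(th, u) - y_meas, x0=[1.0, -0.5, 0.1, 0.1])
print("estimated parameters:", np.round(fit.x, 3))
```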
Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1997-01-01
A nonparametric-validated, surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria, eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set based, geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general, two-output, two-performance metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies and the error estimates provided by the output validation step still apply, and require no additional appeals to the expensive analysis. Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.
NASA Astrophysics Data System (ADS)
Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang
2017-06-01
The study of deterministic optimal reservoir operation can improve the utilization rate of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output error and hinder implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecast inflow was treated as a variable in a short-term reservoir optimal operation model developed to reduce operation risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to represent the maximum possible loss of power generation schedules, and then an extreme value theory-genetic algorithm (EVT-GA) was proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, different assurance rates of schedules can be derived by the model, which presents more flexible options for decision makers, and the highest assurance rate can reach 99%, much higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
Global optimization method based on ray tracing to achieve optimum figure error compensation
NASA Astrophysics Data System (ADS)
Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin
2017-02-01
Figure errors degrade the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking optical components around the optical axis is a conventional but user-dependent method, and commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation errors, and building the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure error balancing is proposed. The method uses precise ray tracing, rather than approximate calculation, to evaluate the wavefront error for a given combination of element rotation angles. The root-mean-square (RMS) of the composite wavefront error acts as the cost function, and a simulated annealing algorithm seeks the optimal combination of rotation angles of each optical element. The method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
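A sketch of the optimization loop under a strong simplification: instead of ray tracing, each element's figure error is represented by a few azimuthal harmonics that rotate rigidly with the element's clocking angle, and scipy's dual_annealing stands in for the simulated annealing search. All amplitudes, orders, and phases are hypothetical.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Hypothetical per-element figure errors as azimuthal harmonics (amplitude, order m, phase).
# Clocking an element by angle r shifts its contribution to cos(m*(theta - r) + phase).
elements = [(40e-9, 2, 0.3), (35e-9, 2, 1.7), (20e-9, 3, 0.9), (15e-9, 3, 2.4)]
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)   # pupil azimuth samples

def composite_rms(rot):
    """RMS of the summed wavefront error for a given set of clocking angles."""
    w = np.zeros_like(theta)
    for (amp, m, phase), r in zip(elements, rot):
        w += amp * np.cos(m * (theta - r) + phase)
    return np.sqrt(np.mean(w**2))

bounds = [(0.0, 2.0 * np.pi)] * len(elements)
result = dual_annealing(composite_rms, bounds, seed=1)
print("optimal clocking angles (rad):", np.round(result.x, 3))
print("residual RMS wavefront error (nm):", composite_rms(result.x) * 1e9)
```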
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
NASA Astrophysics Data System (ADS)
Sone, Akihito; Kato, Takeyoshi; Shimakage, Toyonari; Suzuoki, Yasuo
A microgrid (MG) is one of the measures for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). If a number of MGs are controlled to maintain a predetermined electricity demand, with RE-based DGs counted as negative demand, they can contribute to supply-demand balancing of the whole electric power system. To construct a MG economically, optimizing the capacity of controllable DGs against the RE-based DGs is essential. Using a numerical simulation model developed from a demonstrative study of a MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as the RE-based DG, this study examines the influence of PVS output forecast accuracy on capacity optimization. Three forecast cases with different accuracy are compared. The main results are as follows. Even with an ideal forecast that has no error over each 30-minute interval, the required NaS battery capacity reaches about 40% of the PVS capacity in order to mitigate the instantaneous forecast error within each interval. The capacity required to compensate for the forecast error doubles with the actual forecast method. The influence of forecast error can be reduced by adjusting the scheduled power output of the controllable DGs according to the weather forecast. In addition, the required capacity can be reduced significantly if balancing-control errors in the MG are acceptable for a few percent of the time, because periods with large forecast error are infrequent.
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang
2015-01-01
A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1×9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviations of the nine outgoing beam energies in the optimized gratings were 2.3 and 3.4 times lower than in the optimal-efficiency grating. PMID:25969268
CORRELATED AND ZONAL ERRORS OF GLOBAL ASTROMETRIC MISSIONS: A SPHERICAL HARMONIC SOLUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, V. V.; Dorland, B. N.; Gaume, R. A.
We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
Correlated and Zonal Errors of Global Astrometric Missions: A Spherical Harmonic Solution
NASA Astrophysics Data System (ADS)
Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.
2012-07-01
We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
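A sketch of a Sobol' sensitivity analysis over coexisting forcing-error factors, assuming the SALib package is available; the three factors, their bounds, and the algebraic stand-in for the snow model are invented for illustration.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical forcing-error factors: precipitation bias, temperature bias,
# and random-error magnitude in shortwave radiation (units are illustrative).
problem = {
    "num_vars": 3,
    "names": ["precip_bias", "temp_bias", "sw_random_err"],
    "bounds": [[-0.3, 0.3], [-2.0, 2.0], [0.0, 50.0]],
}

def toy_swe_response(x):
    """Stand-in for the snow model: peak SWE responds strongly to precipitation
    bias, moderately to temperature bias, weakly to radiation random error."""
    p, t, sw = x
    return 500.0 * (1.0 + p) - 40.0 * t - 0.2 * sw + 5.0 * p * t

X = saltelli.sample(problem, 1024)            # Sobol' sampling design
Y = np.apply_along_axis(toy_swe_response, 1, X)
Si = sobol.analyze(problem, Y)
print("first-order indices:", dict(zip(problem["names"], np.round(Si["S1"], 3))))
print("total-order indices:", dict(zip(problem["names"], np.round(Si["ST"], 3))))
```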
An effective parameter optimization with radiation balance constraints in the CAM5
NASA Astrophysics Data System (ADS)
Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.
2017-12-01
Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimizations, so simulations run with the "optimal" parameters may not satisfy conditions that the model must maintain. In this study, the radiation balance constraint is taken as an example and is incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this constrained optimization problem. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take a synthesized metric built from global means of radiation, precipitation, relative humidity, and temperature as the optimization goal, and simultaneously treat the conditions that FLUT and FSNTOA should satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are required to be approximately equal to 240 W m-2 in CAM5. Experimental results show that the synthesized metric is 13.6% better than in the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions. The FLUT condition is well satisfied and is clearly better than the annual-average FLUT obtained with the default parameters. FSNTOA deviates slightly from the target value, but the relative error is less than 7.7‰.
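A sketch of constrained tuning in the same spirit, using scipy's SLSQP (a sequential quadratic programming method) rather than the paper's Lagrangian-multiplier procedure; the two-parameter surrogate for the GCM and its flux responses are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def surrogate_metrics(p):
    """Toy stand-in for the GCM: maps two tunable parameters to a skill metric
    and to the simulated top-of-atmosphere fluxes FLUT and FSNTOA (W m^-2)."""
    skill = (p[0] - 0.4) ** 2 + 2.0 * (p[1] - 1.2) ** 2     # lower is better
    flut = 235.0 + 12.0 * p[0] + 3.0 * p[1]
    fsntoa = 232.0 + 5.0 * p[0] + 9.0 * p[1]
    return skill, flut, fsntoa

target = 240.0   # W m^-2, the radiation-balance target quoted in the abstract

constraints = [
    {"type": "eq", "fun": lambda p: surrogate_metrics(p)[1] - target},   # FLUT
    {"type": "eq", "fun": lambda p: surrogate_metrics(p)[2] - target},   # FSNTOA
]

res = minimize(lambda p: surrogate_metrics(p)[0], x0=[0.5, 1.0],
               method="SLSQP", constraints=constraints,
               bounds=[(0.0, 1.0), (0.5, 2.0)])
skill, flut, fsntoa = surrogate_metrics(res.x)
print(f"tuned parameters {np.round(res.x, 3)}, skill {skill:.3f}, "
      f"FLUT {flut:.1f}, FSNTOA {fsntoa:.1f}")
```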
Signature-based store checking buffer
Sridharan, Vilas; Gurumurthi, Sudhanva
2015-06-02
A system and method for optimizing redundant output verification are provided. A hardware-based store fingerprint buffer receives multiple instances of output from multiple instances of computation. The store fingerprint buffer generates a signature from the content included in the multiple instances of output. When a barrier is reached, the store fingerprint buffer uses the signature to verify the content is error-free.
Eliciting Naturalistic Cortical Responses with a Sensory Prosthesis via Optimized Microstimulation
2016-08-12
error and correlation as metrics amenable to highly efficient convex optimization. This study concentrates on characterizing the neural responses to both... spiking signal. For LFP, distance measures such as the traditional mean-squared error and cross-correlation can be used, whereas distances between spike... with parameters that describe their associated temporal dynamics and relations to the observed output. A description of the model follows, but we
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that will result in satisfactory thermal solutions. Apart from providing basic statistical information such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide which input parameter(s) the output quantity of interest is most sensitive to. This guides the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
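A minimal Monte Carlo propagation sketch for a series thermal-resistance model; the input distributions and resistance values are illustrative assumptions, not data from the chapter.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000

# Hypothetical uncertain inputs for a simple chip-and-heat-sink thermal path
power = rng.normal(25.0, 2.0, n)      # heat dissipation, W
r_jc = rng.normal(0.50, 0.05, n)      # junction-to-case resistance, K/W
r_sa = rng.normal(1.20, 0.15, n)      # sink-to-ambient resistance, K/W
t_amb = rng.normal(35.0, 1.5, n)      # ambient temperature, deg C

# Output of interest: junction temperature via a series thermal-resistance model
t_junction = t_amb + power * (r_jc + r_sa)

print(f"mean {t_junction.mean():.1f} C, std {t_junction.std():.1f} C")
print(f"P(T_junction > 85 C) = {(t_junction > 85.0).mean():.3f}")

# Crude global sensitivity ranking: correlation of each input with the output
for name, x in [("power", power), ("r_jc", r_jc), ("r_sa", r_sa), ("t_amb", t_amb)]:
    print(f"corr({name}, T_junction) = {np.corrcoef(x, t_junction)[0, 1]:.2f}")
```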
NASA Astrophysics Data System (ADS)
Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi
2017-01-01
This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system over a finite-time horizon, using the method of an augmented error system. First, an assistant system is introduced for state shifting. Then, to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. The augmented error system, which contains the integrator vector, control input, reference signal, error vector, and state of the system, is then constructed. This transforms the optimal preview tracking problem of the linear stochastic control system into an optimal output tracking problem for the augmented error system. Using dynamic programming from stochastic control theory, the optimal controller of the augmented error system, which incorporates the previewable signals and is equivalent to the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.
An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.
2017-01-01
The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating (bagging). In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
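A sketch of the surrogate-based calibration loop with plainly swapped-in components: sklearn's bagged regression trees stand in for BMARS, a small algebraic function stands in for the MODFLOW run, and differential evolution stands in for the calibration optimizer; all names and values are hypothetical.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from scipy.optimize import differential_evolution

def expensive_model(params):
    """Toy stand-in for the MODFLOW run: maps 3 parameters to 5 'observed heads'."""
    k, rech, stor = params
    x = np.arange(5)
    return 50.0 - 2.0 * k * x + 10.0 * rech * np.sqrt(x + 1) - stor * x**2

rng = np.random.default_rng(0)
obs = expensive_model([1.2, 0.8, 0.3]) + rng.normal(0, 0.2, 5)   # synthetic "observations"

# Offline stage: sample the parameter space and train a bagged surrogate of the NRMSE
bounds = [(0.5, 2.0), (0.2, 1.5), (0.0, 1.0)]
train_x = np.array([[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(300)])
nrmse = lambda p: np.sqrt(np.mean((expensive_model(p) - obs) ** 2)) / np.ptp(obs)
train_y = np.array([nrmse(p) for p in train_x])
surrogate = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50,
                             random_state=0).fit(train_x, train_y)

# Online stage: calibrate by minimizing the surrogate-predicted NRMSE
result = differential_evolution(lambda p: surrogate.predict([p])[0], bounds, seed=0)
print("calibrated parameters:", np.round(result.x, 2), "NRMSE:", round(nrmse(result.x), 3))
```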
Artifacts in digital coincidence timing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moses, W. W.; Peng, Q.
Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time to digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and selecting the method that minimizes the width of the distribution (i.e. the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the 'optimal' method. The purpose of this note is to demonstrate the origin of this artifact and suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization.
Automation of POST Cases via External Optimizer and "Artificial p2" Calculation
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.; Michelson, Diane K.
2017-01-01
During conceptual design, speed and accuracy are often at odds. Specifically, in the realm of launch vehicles, optimizing the ascent trajectory requires a large pool of analytical power and expertise. Experienced analysts working on familiar vehicles can produce optimal trajectories in a short time frame; however, whenever either "experienced" or "familiar" is not applicable, the optimization process can become quite lengthy. In order to construct a vehicle-agnostic method, an established global optimization algorithm is needed. In this work the authors develop an "artificial" error term that maps arbitrary control vectors to a non-zero error by which a global method can operate. Two global methods are compared alongside Design of Experiments and random sampling and are shown to produce results comparable to analysis done by a human expert.
Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong
2011-12-01
In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.
NASA Astrophysics Data System (ADS)
Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu
2016-12-01
Ionospheric tomography uses the observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct the three-dimensional electron density distribution. Because of the incomplete measurements provided by the satellite-receiver geometry, it is a typical ill-posed problem, and how to overcome the ill-posedness remains a crucial research topic. In this paper, the Tikhonov regularization method is used and the model-function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between the sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying the background model variance by a location-dependent spatial correlation, and the correlation model is developed using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used for independent validation. Both the test cases using artificial sTEC observations and those using actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.
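The regularized retrieval step can be written compactly when the background error covariance is taken as a scaled identity (a simplification of the paper's covariance model) and the regularization parameter is fixed by hand rather than chosen by the model-function approach; the geometry matrix and densities below are synthetic.

```python
import numpy as np

def tikhonov_with_background(A, stec, x_bg, lam):
    """Solve min_x ||A x - stec||^2 + lam * ||x - x_bg||^2 in closed form.
    A    : geometry matrix (ray path lengths through voxels)
    stec : slant TEC observations along satellite-receiver rays
    x_bg : background electron densities (e.g. from IRI2012)
    lam  : regularization parameter (hand-picked here)"""
    n = A.shape[1]
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ stec + lam * x_bg
    return np.linalg.solve(lhs, rhs)

# Small synthetic example: 40 rays through 25 voxels
rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, (40, 25))
x_true = 1.0 + 0.5 * np.sin(np.linspace(0, np.pi, 25))
x_bg = np.ones(25)                                    # biased background field
stec = A @ x_true + rng.normal(0, 0.05, 40)
x_hat = tikhonov_with_background(A, stec, x_bg, lam=1.0)
print("background RMS error :", np.sqrt(np.mean((x_bg - x_true) ** 2)).round(3))
print("retrieved RMS error  :", np.sqrt(np.mean((x_hat - x_true) ** 2)).round(3))
```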
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL
2016-09-15
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering, and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files via the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with parameters such as temperature, air pressure, wind velocity, humidity, and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a GIS-based mapping process that reports the current weather status at given coordinates of each region and can forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. Forecasting error is measured by the mean square error (MSE). The error for minimum temperature is 0.28 and for maximum temperature 0.15; the error for minimum humidity is 0.38 and for maximum humidity 0.04. The forecasting error for wind speed is 0.076. The lower the forecasting error, the higher the accuracy.
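A minimal Gaussian Bayesian Model Averaging sketch, fitting member weights and a shared variance by EM; the three synthetic ensemble members are invented, and the shared-variance formulation is a simplification of operational BMA.

```python
import numpy as np

def bma_em(forecasts, obs, iters=200):
    """Minimal Gaussian BMA: learn member weights and a common variance by EM.
    forecasts: (n_samples, n_members), obs: (n_samples,)."""
    n, k = forecasts.shape
    w = np.full(k, 1.0 / k)
    var = np.var(obs - forecasts.mean(axis=1))
    for _ in range(iters):
        # E-step: responsibility of each member for each observation
        dens = np.exp(-0.5 * (obs[:, None] - forecasts) ** 2 / var) / np.sqrt(2 * np.pi * var)
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update weights and shared variance
        w = z.mean(axis=0)
        var = np.sum(z * (obs[:, None] - forecasts) ** 2) / n
    return w, var

# Synthetic example: three 'model' forecasts of temperature with different skill
rng = np.random.default_rng(7)
truth = 20 + 5 * np.sin(np.linspace(0, 6, 300))
members = np.stack([truth + rng.normal(0, 0.5, 300),       # good member
                    truth + rng.normal(1.0, 1.0, 300),     # biased member
                    truth + rng.normal(0, 2.0, 300)], 1)   # noisy member
w, var = bma_em(members, truth)
bma_mean = members @ w
print("weights:", np.round(w, 2), " MSE:", round(np.mean((bma_mean - truth) ** 2), 3))
```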
Parameter estimation of a pulp digester model with derivative-free optimization strategies
NASA Astrophysics Data System (ADS)
Seiça, João C.; Romanenko, Andrey; Fernandes, Florbela P.; Santos, Lino O.; Fernandes, Natércia C. P.
2017-07-01
The work concerns parameter estimation in the context of mechanistic modelling of a pulp digester. The problem is cast as a box-bounded nonlinear global optimization problem in order to minimize the mismatch between the model outputs and the experimental data observed at a real pulp and paper plant. The MCSFilter and Simulated Annealing global optimization methods were used to solve the optimization problem. While the former took longer to converge to the global minimum, the latter terminated faster at a significantly higher value of the objective function and, thus, failed to find the global solution.
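A sketch of the box-bounded global search using scipy's dual_annealing; the two-parameter kappa-number decay model below is an invented stand-in for the mechanistic digester model, and all values are hypothetical.

```python
import numpy as np
from scipy.optimize import dual_annealing

def digester_model(params, t):
    """Toy stand-in for the pulp digester model: predicted kappa number decaying
    with cooking time, governed by two kinetic parameters."""
    k_rate, kappa_inf = params
    return kappa_inf + (120.0 - kappa_inf) * np.exp(-k_rate * t)

t = np.linspace(0.0, 4.0, 25)                                 # cooking time, h
rng = np.random.default_rng(5)
kappa_obs = digester_model([0.9, 18.0], t) + rng.normal(0, 1.5, t.size)  # "plant data"

# Box-bounded global search for parameters minimizing the model/data mismatch
mismatch = lambda p: np.sum((digester_model(p, t) - kappa_obs) ** 2)
result = dual_annealing(mismatch, bounds=[(0.1, 3.0), (5.0, 40.0)], seed=0)
print("estimated parameters:", np.round(result.x, 2), "SSE:", round(result.fun, 1))
```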
NASA Technical Reports Server (NTRS)
Martos, Borja; Kiszely, Paul; Foster, John V.
2011-01-01
As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeed with defined confidence bounds. The method has been demonstrated in subscale flight tests and has shown small 2-σ error bounds with a significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included, as well as recommendations for piloting technique.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, whose behavior is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve good learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
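A sketch of the hybrid-kernel idea, using kernel ridge regression as a close stand-in for LSSVM (both are least-squares kernel models) and a hand-picked kernel mix instead of the chaotic ions motion search; the calibration data are synthetic.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def hybrid_kernel(X, Y, gamma=0.5, degree=2, mix=0.7):
    """Convex combination of a local RBF kernel and a global polynomial kernel."""
    return mix * rbf_kernel(X, Y, gamma=gamma) + (1 - mix) * polynomial_kernel(X, Y, degree=degree)

# Synthetic calibration data: sensor offset drifting nonlinearly with temperature
rng = np.random.default_rng(2)
temp = np.sort(rng.uniform(-20, 80, 120)).reshape(-1, 1)          # deg C
drift_true = 0.002 * temp[:, 0] ** 2 - 0.05 * temp[:, 0]          # pressure-offset drift
drift_meas = drift_true + rng.normal(0, 0.1, temp.shape[0])

# Least-squares kernel model of the drift; the compensated output subtracts it
model = KernelRidge(alpha=1e-2, kernel="precomputed")
K_train = hybrid_kernel(temp, temp)
model.fit(K_train, drift_meas)

temp_new = np.array([[25.0], [60.0]])
predicted_drift = model.predict(hybrid_kernel(temp_new, temp))
print("predicted drift to subtract at 25 C and 60 C:", np.round(predicted_drift, 3))
```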
The Sizing and Optimization Language, (SOL): Computer language for design problems
NASA Technical Reports Server (NTRS)
Lucas, Stephen H.; Scotti, Stephen J.
1988-01-01
The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to modeling and optimizing a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; and focus on aerodynamic databases for parametric and optimization studies, which demand (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10^5 mesh generations; and (3) automation: avoid user supervision. The goals are to obtain "expert meshes" independent of user skill and to run every case adaptively in production settings.
An optimization-based framework for anisotropic simplex mesh adaptation
NASA Astrophysics Data System (ADS)
Yano, Masayuki; Darmofal, David L.
2012-09-01
We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.
A global design of high power Nd3+-Yb3+ co-doped fiber lasers
NASA Astrophysics Data System (ADS)
Fan, Zhang; Chuncan, Wang; Tigang, Ning
2008-09-01
A global optimization method, the niche hybrid genetic algorithm (NHGA) based on fitness sharing and elite replacement, is applied to optimize Nd3+-Yb3+ co-doped fiber lasers (NYDFLs) for maximum signal output power. With an objective function and different pumping powers, five critical parameters (the fiber length, L; the proportion of pump power used to pump Nd3+, η; the Nd3+ and Yb3+ concentrations, NNd and NYb; and the output mirror reflectivity, Rout) of the given NYDFLs are optimized by solving the rate and power propagation equations. Results show that dividing the input pump power equally between 808 nm (Nd3+) and 940 nm (Yb3+) is not the optimal choice; the pump power for the Nd3+ ions should be kept around 10-13.78% of the total pump power. Three optimal schemes are obtained by the NHGA, and the highest slope efficiency of the laser reaches 80.1%.
Artifacts in Digital Coincidence Timing
Moses, W. W.; Peng, Q.
2014-01-01
Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time to digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and selecting the method that minimizes the width of the distribution (i.e., the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the “optimal” method. The purpose of this note is to demonstrate the origin of this artifact and suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization. PMID:25321885
Ring rolling process simulation for geometry optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Ring Rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS Isight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied. In the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
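The RSM-plus-GA loop can be sketched as follows (a simplified stand-in; the actual study uses DEFORM FEM runs as control points and DS Isight's RSM and genetic-algorithm tooling, and the design points and error values below are invented for illustration): a quadratic response surface is fitted by least squares to a handful of simulated design points, and a genetic-style global optimizer then searches the surface for the input combination that minimizes the predicted dimensional error.

```python
import numpy as np
from scipy.optimize import differential_evolution

# hypothetical design points: (mandrel feed rate, main-roll speed) -> % dimensional error
X = np.array([[0.5, 10], [0.5, 20], [1.0, 10], [1.0, 20], [0.75, 15],
              [0.6, 12], [0.9, 18], [0.7, 16], [0.8, 14]])
err = np.array([2.1, 1.4, 1.8, 0.9, 1.0, 1.6, 0.8, 1.1, 0.95])  # made-up FEM outputs

def quad_features(x):
    f, w = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(f), f, w, f * w, f**2, w**2], axis=-1)

# least-squares fit of the quadratic response surface
coef, *_ = np.linalg.lstsq(quad_features(X), err, rcond=None)
surrogate = lambda x: quad_features(np.asarray(x)) @ coef

# genetic-style global search (differential evolution) over the design space
res = differential_evolution(surrogate, bounds=[(0.5, 1.0), (10, 20)], seed=0)
print("predicted optimum (feed, speed):", res.x, " predicted error:", res.fun)
```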
Multiple-copy state discrimination: Thinking globally, acting locally
NASA Astrophysics Data System (ADS)
Higgins, B. L.; Doherty, A. C.; Bartlett, S. D.; Pryde, G. J.; Wiseman, H. M.
2011-05-01
We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.
Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas
2016-09-01
An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model we combine the best features of each source of information: the complete spatial and temporal coverage provided by models, with a close representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. Producing an analysis requires knowledge of the observation and model errors, as well as their spatial correlations. This paper is devoted to the development of methods for estimating these error variances and the characteristic length-scale of the model error correlation for operational use in the Canadian objective analysis system. We first argue in favor of using compact-support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood (ML) method, and the [Formula: see text] diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and estimate both error variances and the correlation length where both are non-uniform. We show that a local version of the HL method can accurately capture the error variances and correlation length at each observation site, provided that the spatial variability is not too strong. However, the operational objective analysis requires only a single, globally valid correlation length. We examine whether any statistic of the local HL correlation lengths could be a useful estimate, or whether other global estimation methods such as the global HL, ML, or [Formula: see text] methods should be used. We find, both in 1D simulation and using real data, that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical and larger length-scale values. This paper describes a proposed improvement of the objective analysis of surface pollutants at Environment and Climate Change Canada (formerly known as Environment Canada). Objective analyses are essentially surface maps of air pollutants obtained by combining observations with an air quality model output, and are thought to provide a more complete and more accurate representation of air quality. The highlight of this study is an analysis of methods to estimate the model (or background) error correlation length-scale. The error statistics are an important and critical component of the analysis scheme.
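A toy version of the local HL fit (a sketch only, with synthetic numbers; the operational implementation and the compact-support functions in the paper differ) bins innovation covariances by station separation, fits an isotropic correlation model at separations greater than zero, and reads off the background variance from the extrapolation to zero distance, the remainder of the innovation variance at zero being attributed to observation error.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# synthetic truth: background error variance, obs error variance, correlation length (km)
sig_b2, sig_o2, L_true = 1.5, 0.8, 300.0

# synthetic innovation covariance statistics on a set of stations
x = rng.uniform(0, 2000, 80)                       # station positions (km, 1-D for simplicity)
d = np.abs(x[:, None] - x[None, :])                # separation matrix
cov = sig_b2 * np.exp(-d / L_true)                 # background-error covariance part
cov[np.diag_indices_from(cov)] += sig_o2           # obs error is spatially uncorrelated

def hl_model(dist, var_b, length):
    return var_b * np.exp(-dist / length)

# fit only the off-diagonal (dist > 0) covariances, as in the HL method
mask = ~np.eye(len(x), dtype=bool)
popt, _ = curve_fit(hl_model, d[mask], cov[mask], p0=[1.0, 500.0])
var_b_hat, L_hat = popt
var_o_hat = cov[0, 0] - var_b_hat                  # innovation variance minus background part

print(f"background var ~ {var_b_hat:.2f}, obs var ~ {var_o_hat:.2f}, length ~ {L_hat:.0f} km")
```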
Robust ADP Design for Continuous-Time Nonlinear Systems With Output Constraints.
Fan, Bo; Yang, Qinmin; Tang, Xiaoyu; Sun, Youxian
2018-06-01
In this paper, a novel robust adaptive dynamic programming (RADP)-based control strategy is presented for the optimal control of a class of output-constrained continuous-time unknown nonlinear systems. Our contribution is a step beyond the usual optimal control result, showing that the output of the plant always remains within user-defined bounds. To achieve the new results, an error transformation technique is first established to generate an equivalent nonlinear system whose asymptotic stability guarantees both the asymptotic stability and the satisfaction of the output restriction of the original system. Furthermore, RADP algorithms are developed to solve the transformed nonlinear optimal control problem with completely unknown dynamics, together with a robust design to guarantee the stability of the closed-loop system in the presence of an unavailable internal dynamic state. Via the small-gain theorem, asymptotic stability of the original and transformed nonlinear systems is theoretically guaranteed. Finally, comparison results demonstrate the merits of the proposed control policy.
Applied Computational Electromagnetics Society Journal, Volume 9, Number 2
1994-07-01
input/output standardization; code or technique optimization and error minimization; innovations in solution technique or in data input/output.
Aerodynamic coefficient identification package dynamic data accuracy determinations: Lessons learned
NASA Technical Reports Server (NTRS)
Heck, M. L.; Findlay, J. T.; Compton, H. R.
1983-01-01
The errors in the dynamic data output from the Aerodynamic Coefficient Identification Packages (ACIP) flown on Shuttle flights 1, 3, 4, and 5 were determined using the output from the Inertial Measurement Units (IMU). A weighted least-squares batch algorithm was employed. Using an averaging technique, signal detection was enhanced; this allowed improved calibration solutions. Global errors as large as 0.04 deg/sec for the ACIP gyros, 30 mg for the linear accelerometers, and 0.5 deg/sec squared in the angular accelerometer channels were detected and removed with a combination of bias, scale factor, misalignment, and g-sensitive calibration constants. No attempt was made to minimize local ACIP dynamic data deviations representing sensed high-frequency vibration or instrument noise. The resulting 1-sigma calibrated ACIP global accuracies were within 0.003 deg/sec, 1.0 mg, and 0.05 deg/sec squared for the gyros, linear accelerometers, and angular accelerometers, respectively.
Some Insights of Spectral Optimization in Ocean Color Inversion
NASA Technical Reports Server (NTRS)
Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert
2011-01-01
In the past decades various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of these approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote sensing reflectance; in other words, the equations are solved numerically. Applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulation and field measurements, we show the shape of the error surface, in order to examine the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, the impacts of such differences on the error surface as well as on the retrievals are also presented.
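The spectral-optimization step can be sketched with a strongly simplified forward model (a hedged illustration; the spectral shapes, coefficients, wavelengths and error function used operationally differ, and the values below are rough stand-ins): remote-sensing reflectance is modeled from a phytoplankton absorption magnitude and a particle backscattering magnitude, and a numerical optimizer drives the model-versus-measurement error to a minimum.

```python
import numpy as np
from scipy.optimize import minimize

wl = np.array([412, 443, 490, 510, 555, 670.0])                    # wavelengths (nm)
aw = np.array([0.0046, 0.0071, 0.015, 0.033, 0.060, 0.439])        # rough pure-water absorption
bbw = np.array([0.0038, 0.0028, 0.0018, 0.0015, 0.0010, 0.0005])   # rough water backscattering

def forward_rrs(p):
    """Very simplified reflectance model: rrs = g0*u + g1*u^2, u = bb/(a+bb)."""
    aph440, bbp440 = p
    aph = aph440 * np.exp(-0.015 * (wl - 440))        # crude phytoplankton absorption shape
    bbp = bbp440 * (440.0 / wl)                       # crude particle backscattering shape
    a, bb = aw + aph, bbw + bbp
    u = bb / (a + bb)
    return 0.0949 * u + 0.0794 * u**2

# pretend these are the measured Rrs values we want to invert
rrs_obs = forward_rrs([0.05, 0.004]) * (1 + 0.01 * np.random.default_rng(3).normal(size=wl.size))

cost = lambda p: np.sum((forward_rrs(p) - rrs_obs) ** 2)   # the error function being minimized
sol = minimize(cost, x0=[0.02, 0.002], bounds=[(1e-4, 1.0), (1e-5, 0.1)])
print("retrieved aph(440), bbp(440):", sol.x)
```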
Adjoint-Based Mesh Adaptation for the Sonic Boom Signature Loudness
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.; Park, Michael A.
2017-01-01
The mesh adaptation functionality of FUN3D is utilized to obtain a mesh optimized to calculate sonic boom ground signature loudness. During this process, the coupling between the discrete adjoints of the computational fluid dynamics tool FUN3D and the atmospheric propagation tool sBOOM is exploited to form the error estimate. This new mesh adaptation methodology will allow generation of suitable meshes adapted to reduce the estimated errors in the ground loudness, which is an optimization metric employed in supersonic aircraft design. This new output-based adaptation could allow new insights into meshing for sonic boom analysis and design, and complements existing output-based adaptation techniques such as adaptation to reduce estimated errors in an off-body pressure functional. This effort could also have implications for other coupled multidisciplinary adjoint capabilities (e.g., aeroelasticity) as well as for the inclusion of propagation-specific parameters such as prevailing winds or non-standard atmospheric conditions. Results are discussed in the context of existing methods and appropriate conclusions are drawn as to the efficacy and efficiency of the developed capability.
Climate model biases in seasonality of continental water storage revealed by satellite gravimetry
Swenson, Sean; Milly, P.C.D.
2006-01-01
Satellite gravimetric observations of monthly changes in continental water storage are compared with outputs from five climate models. All models qualitatively reproduce the global pattern of annual storage amplitude, and the seasonal cycle of global average storage is reproduced well, consistent with earlier studies. However, global average agreements mask systematic model biases in low latitudes. Seasonal extrema of low‐latitude, hemispheric storage generally occur too early in the models, and model‐specific errors in amplitude of the low‐latitude annual variations are substantial. These errors are potentially explicable in terms of neglected or suboptimally parameterized water stores in the land models and precipitation biases in the climate models.
Texas two-step: a framework for optimal multi-input single-output deconvolution.
Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G
2007-11-01
Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two-step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
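Under the simplest assumptions (known blurs, circular convolution, independent white Gaussian noise with equal variance in every channel), the sufficient-statistic reduction amounts to matched-filtering each observation and summing, which collapses the MISO-D problem into a SISO-D problem with an effective blur; any SISO deconvolver, here a basic Wiener-style regularized inverse, can then be applied. This sketch is illustrative only and is not the wavelet/curvelet machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
x = np.zeros(n); x[60] = 1.0; x[150] = -0.7          # sparse toy target signal

# two known blur kernels (frequency domain) and noisy observations y_i = h_i * x + noise
H = [np.fft.fft(np.exp(-0.5 * ((np.arange(n) - c) % n / 3.0) ** 2)) for c in (0, 5)]
X = np.fft.fft(x)
Y = [h * X + np.fft.fft(rng.normal(0, 0.02, n)) for h in H]

# Step 1: sufficient-statistic reduction -> single "observation" with an effective blur
Z = sum(np.conj(h) * y for h, y in zip(H, Y))        # matched filter and sum
G = sum(np.abs(h) ** 2 for h in H)                   # effective (SISO) blur spectrum

# Step 2: any SISO deconvolution; here a simple Wiener-style regularized inverse
lam = 1e-2
x_hat = np.real(np.fft.ifft(Z / (G + lam)))
print("largest recovered peaks near indices:", np.sort(np.argsort(np.abs(x_hat))[-2:]))
```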
Preliminary study of injection transients in the TPS storage ring
NASA Astrophysics Data System (ADS)
Chen, C. H.; Liu, Y. C.; Y Chen, J.; Chiu, M. S.; Tseng, F. H.; Fann, S.; Liang, C. C.; Huang, C. S.; Y Lee, T.; Y Chen, B.; Tsai, H. J.; Luo, G. H.; Kuo, C. C.
2017-07-01
An optimized injection efficiency relies on a perfect match between the pulsed magnetic fields in the storage ring and the transfer-line extraction in the TPS. However, misalignment errors, hardware output errors and leakage fields are unavoidable. We study the influence of injection transients on the stored TPS beam and discuss solutions to compensate for them. Related simulations and measurements will be presented.
Zeng, Xiaozheng; McGough, Robert J.
2009-01-01
The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three-dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two-dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
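The core of the angular spectrum approach is compact enough to sketch (hedged: the paper's evaluation concerns the optimal input-plane size, location and accuracy, which this toy example ignores): the input pressure plane is Fourier transformed, each plane-wave component is multiplied by its propagation phase, and an inverse transform gives the field in the output plane.

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, wavelength, z):
    """Propagate a 2-D input pressure plane p0 a distance z (same units as dx)."""
    k = 2 * np.pi / wavelength
    ny, nx = p0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))   # evanescent components decay
    return np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))

# toy example: a 5-wavelength square piston source, propagated 50 wavelengths
wavelength, dx = 1.0, 0.25
p0 = np.zeros((256, 256), dtype=complex)
p0[118:138, 118:138] = 1.0
p_z = angular_spectrum_propagate(p0, dx, wavelength, z=50 * wavelength)
print("peak |p| in output plane:", np.abs(p_z).max())
```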
Optimizer convergence and local minima errors and their clinical importance
NASA Astrophysics Data System (ADS)
Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.
2003-09-01
Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
The Errors Sources Affect to the Results of One-Way Nested Ocean Regional Circulation Model
NASA Astrophysics Data System (ADS)
Pham, S. V.
2016-02-01
The Oceanic Regional Circulation Model (ORCM) is an essential tool for resolving regional scales by dynamically downscaling the results of coarsely resolved global models. However, when downscaling from the coarse resolution of a global model or of observations to small scales, errors are generated due to the difference in resolution and the lateral-boundary updating frequency. This research evaluates the effect of four main error sources on the results of ocean regional circulation models (ORCMs) when downscaling and nesting output data from ocean global circulation models (OGCMs). The four representative error sources are the formulation of the lateral boundary conditions (LBCs), the spatial resolution difference between driving and driven data, the frequency of updating the LBCs, and the domain size. The errors contributed by each source to the ORCM results are investigated separately by applying the Big-Brother Experiment (BBE). With a 3 km grid resolution imposed on the ORCM in the BBE framework, it is clearly shown that the ORCM simulation results depend significantly on the domain size and especially on the spatial and temporal resolution of the LBCs. The spatial resolution ratio between driving data and driven model can be up to 3, the updating frequency of the LBCs can be up to every 6 hours, and the optimal domain size of the ORCM can be about 2 to 10 times smaller than the OGCM domain. Key words: ORCMs, error source, lateral boundary conditions, domain size. Acknowledgement: This research was supported by grants from the Korean Ministry of Oceans and Fisheries entitled "Developing total management system for the Keum river estuary and coast" and "Development of Technology for CO2 Marine Geological Storage". We also thank the Integrated Research Institute of Construction and Environmental Engineering of Seoul National University for its administrative support.
Global Surface Temperature Change and Uncertainties Since 1861
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)
2002-01-01
The objective of this talk is to analyze the warming trend, and its uncertainties, of the global and hemispheric surface temperatures. Using a statistical optimal averaging scheme, the land surface air temperature and sea surface temperature observational data are used to compute the spatial-average annual mean surface air temperature. The optimal averaging method is derived from the minimization of the mean square error between the true and estimated averages and uses the empirical orthogonal functions. The method can accurately estimate the errors of the spatial average due to observational gaps and random measurement errors. In addition, three independent uncertainty factors are quantified: urbanization, changes in in situ observational practices, and sea surface temperature data corrections. Based on these uncertainties, the best linear fit to the annual global surface temperature gives an increase of 0.61 +/- 0.16 C between 1861 and 2000. This lecture will also touch on the impact of global change on nature and the environment, as well as the latest assessment methods for the attribution of global change.
A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization
NASA Technical Reports Server (NTRS)
Foster, John V.; Cunningham, Kevin
2010-01-01
Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by Global Positioning Systems (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically-scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high data rate measurements of static and total pressure, and GPS-based ground speed measurements, to compute the pressure errors over a range of airspeed. The novel aspect of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-sigma bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the optimization method. This paper describes the GPS-based pitot-static calibration method developed for the AirSTAR research test-bed operated as part of the Integrated Resilient Aircraft Controls (IRAC) project in the NASA Aviation Safety Program (AvSP). A description of the method will be provided and results from recent flight tests will be shown to illustrate the performance and advantages of this approach. Discussion of maneuver requirements and data reduction will be included, as well as potential applications.
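A stripped-down version of the idea (a sketch under strong simplifying assumptions, not the AirSTAR output-error formulation; the error-model form, leg pattern and numbers below are invented for illustration): flying legs on several headings, the GPS ground-speed vector equals the true-airspeed vector plus a constant wind, and a nonlinear least-squares fit can recover both the wind components and the coefficients of a simple airspeed-error model, with rough confidence bounds from the estimated covariance.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# synthetic flight legs: indicated airspeed (kts) and track heading (rad)
v_ind = np.tile(np.array([80.0, 100.0, 120.0]), 4)
psi = np.repeat(np.deg2rad([0, 90, 180, 270]), 3)

# "truth" used to generate the data: error model dV = a0 + a1*V_ind, wind (N, E)
a0_t, a1_t, wn_t, we_t = -2.0, 0.03, 5.0, -8.0
v_true = v_ind + a0_t + a1_t * v_ind
gs_n = v_true * np.cos(psi) + wn_t + rng.normal(0, 0.3, v_ind.size)
gs_e = v_true * np.sin(psi) + we_t + rng.normal(0, 0.3, v_ind.size)
gs_obs = np.hypot(gs_n, gs_e)                     # GPS ground-speed magnitude

def residuals(p):
    a0, a1, wn, we = p
    v = v_ind + a0 + a1 * v_ind                   # corrected (true) airspeed
    gs = np.hypot(v * np.cos(psi) + wn, v * np.sin(psi) + we)
    return gs - gs_obs

sol = least_squares(residuals, x0=[0.0, 0.0, 0.0, 0.0])
# rough parameter covariance from the Jacobian (for confidence intervals)
J = sol.jac
cov = np.linalg.inv(J.T @ J) * np.var(sol.fun, ddof=4)
print("a0, a1, wind_N, wind_E:", np.round(sol.x, 3))
print("2-sigma bounds:", np.round(2 * np.sqrt(np.diag(cov)), 3))
```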
Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Hug, Gabriela; Li, Xin
Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources, but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumption that it will be operated under a model predictive control scheme and that the forecast of the renewable energy resources includes prediction errors. A two-stage stochastic model predictive control is formulated and solved, where the optimal usage of the storage is simultaneously determined along with the optimal generation outputs and size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows for the stochastic optimization problem to be solved directly, without using sampling-based approaches, and sizing the storage to account not only for a wide range of potential scenarios, but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to optimally use the available storage.
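The analytical handling of a forecast-error constraint can be illustrated with a short calculation (a generic sketch assuming a Gaussian wind forecast error; the paper's exact formulation and MPC setup are richer, and the numbers are arbitrary): a probabilistic limit is converted into an equivalent deterministic limit tightened by a quantile of the error distribution.

```python
from scipy.stats import norm

# chance constraint: P(net_injection + wind_error <= limit) >= 1 - eps
limit = 100.0          # MW, deterministic limit
sigma = 12.0           # MW, std of the wind forecast error
eps = 0.05             # allowed violation probability

# deterministic equivalent: net_injection <= limit - sigma * z_{1-eps}
tightened_limit = limit - sigma * norm.ppf(1 - eps)
print(f"schedule against a tightened limit of {tightened_limit:.1f} MW instead of {limit} MW")
```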
Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Sen, S. K.
2007-01-01
Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real-world applications. Deterministic methods such as gradient algorithms, as well as randomized methods such as genetic algorithms, may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm/an evolutionary approach is preferable, at least from the quality (accuracy) of the results point of view. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time, and there are no serious differences in this regard. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We present here the pros and cons of both approaches so that the concerned reader/user can decide which approach is most suited for the problem at hand. Also, for a function that is known in tabular form instead of analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one decide which method, out of several available methods, should be employed to obtain the best (least error) output.
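The trade-off described above can be made concrete with a small experiment (illustrative only; the test functions and settings are arbitrary choices, not those of the report): on a smooth convex bowl a gradient-based method is fast and exact, while on a highly multimodal surface such as the Rastrigin function a local gradient method tends to stall in a nearby minimum and a stochastic, evolutionary search is the safer choice.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def bowl(x):                      # smooth, unimodal: gradient methods excel
    return np.sum((x - 1.0) ** 2)

def rastrigin(x):                 # highly multimodal: global/stochastic methods excel
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x0 = np.array([3.7, -2.9])
bounds = [(-5.12, 5.12)] * 2

print("bowl,      gradient (BFGS):", minimize(bowl, x0, method="BFGS").fun)
print("rastrigin, gradient (BFGS):", minimize(rastrigin, x0, method="BFGS").fun)
print("rastrigin, evolutionary   :", differential_evolution(rastrigin, bounds, seed=0).fun)
```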
Novel Hybrid of LS-SVM and Kalman Filter for GPS/INS Integration
NASA Astrophysics Data System (ADS)
Xu, Zhenkai; Li, Yong; Rizos, Chris; Xu, Xiaosu
Integration of Global Positioning System (GPS) and Inertial Navigation System (INS) technologies can overcome the drawbacks of the individual systems. One of the advantages is that the integrated solution can provide continuous navigation capability even during GPS outages. However, bridging GPS outages is still a challenge when Micro-Electro-Mechanical System (MEMS) inertial sensors are used. Methods currently being explored by the research community include applying vehicle motion constraints, optimal smoothers, and artificial intelligence (AI) techniques. In the AI research area, the neural network (NN) approach has been extensively utilised to date. In an NN-based integrated system, a Kalman filter (KF) estimates position, velocity and attitude errors, as well as the inertial sensor errors, to output navigation solutions while GPS signals are available. At the same time, an NN is trained to map the vehicle dynamics to the corresponding KF states, and to correct INS measurements when GPS measurements are unavailable. To achieve good performance it is critical to select suitable quality and an optimal number of samples for the NN; this is sometimes too rigorous a requirement, which limits real-world application of NN-based methods. The support vector machine (SVM) approach is based on the structural risk minimisation principle, instead of the minimised empirical error principle that is commonly implemented in an NN. The SVM can avoid the local minimisation and over-fitting problems of an NN, and therefore can potentially achieve a higher level of global performance. This paper focuses on the least squares support vector machine (LS-SVM), which can solve highly nonlinear and noisy black-box modelling problems. The paper explores the application of the LS-SVM to aid the GPS/INS integrated system, especially during GPS outages, describes the principles of the LS-SVM and of the KF hybrid method, and introduces the LS-SVM regression algorithm. Field test data is processed to evaluate the performance of the proposed approach.
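The LS-SVM regression used to learn the INS error behaviour reduces to a single linear solve (a minimal sketch of the standard LS-SVM formulation with an RBF kernel; the inputs and outputs here are synthetic stand-ins for vehicle dynamics and KF-estimated corrections, not field data):

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma**2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:], X, sigma            # bias, alphas, training data, kernel width

def lssvm_predict(model, Xq):
    b, alpha, Xtr, sigma = model
    d2 = np.sum((Xq[:, None, :] - Xtr[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

# synthetic stand-in: map a 1-D "dynamics" feature to a KF-style error correction
rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 60)
model = lssvm_fit(X, y)
Xq = np.linspace(-3, 3, 5)[:, None]
print(np.round(lssvm_predict(model, Xq), 2), np.round(np.sin(Xq[:, 0]), 2))
```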
Xian, Zhiwen; Hu, Xiaoping; Lian, Junxiang; Zhang, Lilian; Cao, Juliang; Wang, Yujie; Ma, Tao
2014-09-15
Navigation plays a vital role in our daily life. As traditional and commonly used navigation technologies, the Inertial Navigation System (INS) and Global Navigation Satellite System (GNSS) can provide accurate location information, but they suffer from the accumulative error of inertial sensors and cannot be used in satellite-denied environments, respectively. The remarkable navigation ability of animals shows that the polarization pattern of the sky can be used for navigation. A bio-inspired POLarization Navigation Sensor (POLNS) is constructed to detect the polarization of skylight. In contrast to previous approaches, we utilize all the outputs of the POLNS to compute the input polarization angle, based on least squares, which provides optimal angle estimation. In addition, a new sensor calibration algorithm is presented, in which the installation angle errors and sensor biases are taken into consideration. Derivation and implementation of our calibration algorithm are discussed in detail. To evaluate the performance of our algorithms, simulation and real-data tests are carried out to compare our algorithms with several existing algorithms. The comparison results indicate that our algorithms are superior to the others and are more feasible and effective in practice.
Research on particle swarm optimization algorithm based on optimal movement probability
NASA Astrophysics Data System (ADS)
Ma, Jianhong; Zhang, Han; He, Baofeng
2017-01-01
The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used for training feed-forward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. Particles are ranked by fitness so that the optimization problem is considered as a whole, and the error back-propagation gradient descent used to train the BP neural network is combined with the swarm search. Each particle updates its velocity and position according to its individual optimum and the global optimum, with particles learning more from the social (global) optimum and less from their own optimum, which helps them avoid local optima; the gradient information accelerates the local search ability of the PSO and improves the search efficiency. Simulation results show that in the initial stage the algorithm converges rapidly toward the neighborhood of the global optimal solution and then keeps approaching it, and that for the same running time the algorithm has a faster convergence speed and better search performance, especially in the later stage of the search.
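For reference, the plain PSO velocity/position update that the paper modifies looks as follows (a generic sketch; the fitness ranking, optimal-movement-probability weighting and back-propagation gradient coupling of the proposed variant are not reproduced here, and the objective is an arbitrary stand-in for a network training error):

```python
import numpy as np

rng = np.random.default_rng(7)

def objective(x):                    # stand-in for, e.g., a network training error
    return np.sum(x**2, axis=-1)

n_particles, dim, w, c1, c2 = 30, 5, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)]

for it in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    # classic update: inertia + pull toward personal best + pull toward global best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", objective(gbest))
```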
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
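The top-level fusion step follows the linear-minimum-variance rule, which is easy to state in code (a sketch for uncorrelated local estimates; cross-covariances between local filters, and the unscented-transformation machinery of the paper, are ignored here): each local estimate is weighted by the inverse of its error covariance.

```python
import numpy as np

def fuse_lmv(estimates, covariances):
    """Linear-minimum-variance fusion of N local state estimates.

    x_fused = (sum_i P_i^{-1})^{-1} * sum_i P_i^{-1} x_i
    """
    info = sum(np.linalg.inv(P) for P in covariances)           # total information
    weighted = sum(np.linalg.inv(P) @ x for x, P in zip(estimates, covariances))
    P_fused = np.linalg.inv(info)
    return P_fused @ weighted, P_fused

# toy example: three local filters estimating a 2-D state
x_locals = [np.array([1.0, 2.1]), np.array([0.8, 1.9]), np.array([1.1, 2.0])]
P_locals = [np.diag([0.5, 0.2]), np.diag([0.1, 0.4]), np.diag([0.3, 0.3])]
x_f, P_f = fuse_lmv(x_locals, P_locals)
print("fused state:", np.round(x_f, 3), " fused covariance diag:", np.round(np.diag(P_f), 3))
```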
New nonlinear control algorithms for multiple robot arms
NASA Technical Reports Server (NTRS)
Tarn, T. J.; Bejczy, A. K.; Yun, X.
1988-01-01
Multiple coordinated robot arms are modeled by considering the arms as closed kinematic chains and as a force-constrained mechanical system working on the same object simultaneously. In both formulations, a novel dynamic control method is discussed. It is based on feedback linearization and simultaneous output decoupling technique. By applying a nonlinear feedback and a nonlinear coordinate transformation, the complicated model of the multiple robot arms in either formulation is converted into a linear and output decoupled system. The linear system control theory and optimal control theory are used to design robust controllers in the task space. The first formulation has the advantage of automatically handling the coordination and load distribution among the robot arms. In the second formulation, it was found that by choosing a general output equation it became possible simultaneously to superimpose the position and velocity error feedback with the force-torque error feedback in the task space.
NASA Astrophysics Data System (ADS)
Jiao, Peng; Yang, Er; Ni, Yong Xin
2018-06-01
The overland flow resistance on a grassland slope of 20° was studied using simulated rainfall experiments. A model of the overland flow resistance coefficient was established based on a BP neural network. The input variables of the model were rainfall intensity, flow velocity, water depth, and slope-surface roughness, and the output variable was the overland flow resistance coefficient. The model was optimized by a genetic algorithm. The results show that the model can be used to calculate the overland flow resistance coefficient with high simulation accuracy. The average prediction error of the optimized model on the test set is 8.02%, and the maximum prediction error is 18.34%.
NASA Astrophysics Data System (ADS)
Lund, M. T.; Samset, B. H.; Skeie, R. B.; Berntsen, T.
2017-12-01
Several recent studies have used observations from the HIPPO flight campaigns to constrain the modeled vertical distribution of black carbon (BC) over the Pacific. Results indicate a relatively linear relationship between global-mean atmospheric BC residence time, or lifetime, and bias in current models. A lifetime of less than 5 days is necessary for models to reasonably reproduce these observations. This is shorter than what many global models predict, which will in turn affect their estimates of BC climate impacts. Here we use the chemistry-transport model OsloCTM to examine whether this relationship between global BC lifetime and model skill also holds for a broader set of flight campaigns from 2009-2013 covering both remote marine and continental regions at a range of latitudes. We perform four sets of simulations with varying scavenging efficiency to obtain a spread in the modeled global BC lifetime and calculate the model error and bias for each campaign and region. Vertical BC profiles are constructed using an online flight simulator, as well as by averaging and interpolating monthly mean model output, allowing us to quantify sampling errors arising when measurements are compared with model output at different spatial and temporal resolutions. Using the OsloCTM coupled with a microphysical aerosol parameterization, we investigate the sensitivity of modeled BC vertical distribution to uncertainties in the aerosol aging and scavenging processes in more detail. From this, we can quantify how model uncertainties in the BC life cycle propagate into uncertainties in its climate impacts. For most campaigns and regions, a short global-mean BC lifetime corresponds with the lowest model error and bias. On an aggregated level, sampling errors appear to be small, but larger differences are seen in individual regions. However, we also find that model-measurement discrepancies in BC vertical profiles cannot be uniquely attributed to uncertainties in a single process or parameter, at least in this model. Model development therefore needs to focus on improvements to individual processes, supported by a broad range of observational and experimental data, rather than tuning individual, effective parameters such as global BC lifetime.
Uniscale multi-view registration using double dog-leg method
NASA Astrophysics Data System (ADS)
Chen, Chao-I.; Sargent, Dusty; Tsai, Chang-Ming; Wang, Yuan-Fang; Koppel, Dan
2009-02-01
3D computer models of body anatomy can have many uses in medical research and clinical practices. This paper describes a robust method that uses videos of body anatomy to construct multiple, partial 3D structures and then fuse them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing local structures, and the global scale is essential for multi-view registration after all these partial structures are built. In order to provide a good initial guess of the camera movement parameters and outlier free 2D point correspondences for DDL, we also propose a two-stage scheme where multi-RANSAC with a normalized eight-point algorithm is first performed and then a few iterations of an over-determined five-point algorithm is used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods like the iterative closest points (ICP) based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.
Digital Troposcatter Performance Model: Software Documentation.
1983-11-28
Instantaneous detection SNR. Output arguments: OUTISI (R*4) Conditional outage probability; IERR (I*2) Error flag. Global variables input from common IRSN /NUMPAR/ ... meaning only when ITOFF = 3. Possibly given a new value in the following subprograms: TRANSF. IRSN /NUMPAR/ (I*2, NUMPAR.INC) Number of values in SNR
Modeling and Implementation of Multi-Position Non-Continuous Rotation Gyroscope North Finder.
Luo, Jun; Wang, Zhiqian; Shen, Chengwu; Kuijper, Arjan; Wen, Zhuoman; Liu, Shaojin
2016-09-20
Even when the Global Positioning System (GPS) signal is blocked, a rate gyroscope (gyro) north finder is capable of providing the required azimuth reference information to a certain extent. In order to measure the azimuth between the observer and the north direction very accurately, we propose a multi-position, non-continuous rotation gyro north finding scheme. Our new generalized mathematical model analyzes the elements that affect the azimuth measurement precision and can thus provide high precision azimuth reference information. Based on the gyro's principle of detecting the projection of the earth rotation rate on its sensitive axis and the proposed north finding scheme, we derive an accurate mathematical model of the gyro outputs against azimuth in the presence of gyro and shaft misalignments. Combining the gyro output model and the theory of propagation of uncertainty, several approaches to optimize north finding are provided, including reducing the gyro bias error, constraining the gyro random error, increasing the number of rotation points, improving the rotation angle measurement precision, and decreasing the gyro and shaft misalignment angles. Accordingly, a north finder setup is built and an azimuth uncertainty of 18″ is obtained. This paper provides a systematic theory for analyzing the details of the gyro north finder scheme from simulation to implementation. The proposed theory can guide both applied researchers in academia and advanced practitioners in industry in designing high precision, robust north finders based on different types of rate gyroscopes.
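The multi-position principle can be illustrated with a short least-squares fit (a hedged toy model that ignores the misalignment and bias-drift terms handled in the paper): at each table position the gyro senses the horizontal Earth-rate projection as a sinusoid of the table angle, and fitting that sinusoid gives the azimuth.

```python
import numpy as np

rng = np.random.default_rng(8)
omega_e = 15.041 / 3600.0                 # Earth rotation rate, deg/s
lat = np.deg2rad(45.0)
azimuth_true = np.deg2rad(32.0)           # angle from north that we want to recover
bias = 0.002                              # constant gyro bias, deg/s

# gyro readings at several table positions theta_k (non-continuous rotation)
theta = np.deg2rad(np.arange(0, 360, 45))
readings = (omega_e * np.cos(lat) * np.cos(azimuth_true + theta)
            + bias + rng.normal(0, 1e-5, theta.size))

# linear least squares on  y = C*cos(theta) + S*sin(theta) + b
A = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
C, S, b = np.linalg.lstsq(A, readings, rcond=None)[0]
# C = K*cos(A0), S = -K*sin(A0)  with K = omega_e*cos(lat), so A0 = atan2(-S, C)
azimuth_est = np.arctan2(-S, C)
print("estimated azimuth (deg):", np.degrees(azimuth_est))
```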
Predictive and Neural Predictive Control of Uncertain Systems
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.
2000-01-01
Accomplishments and future work are: (1) Stability analysis: the work completed includes characterization of stability of receding horizon-based MPC in the setting of the LQ paradigm. The current work-in-progress includes analyzing local as well as global stability of the closed-loop system under various nonlinearities, for example actuator nonlinearities, sensor nonlinearities, and other plant nonlinearities. Actuator nonlinearities include three major types: saturation, dead-zone, and (0, ∞) sector nonlinearities. (2) Robustness analysis: It is shown that receding horizon parameters such as input and output horizon lengths have a direct effect on the robustness of the system. (3) Code development: A Matlab code has been developed which can simulate various MPC formulations. The current effort is to generalize the code to include the ability to handle all plant types and all MPC types. (4) Improved predictor: It is shown that MPC design can be improved by using better predictors that minimize prediction errors. It is shown analytically and numerically that the Smith predictor can provide closed-loop stability under GPC operation for plants with dead times where the standard optimal predictor fails. (5) Neural network predictors: When a neural network is used as the predictor it can be shown that the neural network predicts the plant output within some finite error bound under certain conditions. Our preliminary study shows that with a proper choice of update laws and network architectures such a bound can be obtained. However, much work needs to be done to obtain a similar result in the general case.
NASA Technical Reports Server (NTRS)
Stepner, D. E.; Mehra, R. K.
1973-01-01
A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. A method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
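The multilevel idea can be conveyed with a generic Monte Carlo sketch (hedged: here the "levels" are crude numerical quadratures of a random integrand rather than reduced-basis/HDG solves, and the optimal sample allocation of the paper is replaced by a fixed geometric one): the expectation of the finest-level output is written as the coarsest-level expectation plus a telescoping sum of level differences, and most samples are spent on the cheap coarse levels.

```python
import numpy as np

rng = np.random.default_rng(9)

def output(level, xi):
    """Toy 'statistical output': trapezoid quadrature of exp(xi*s) with 2**(level+2) intervals."""
    s = np.linspace(0.0, 1.0, 2 ** (level + 2) + 1)
    return np.trapz(np.exp(xi[:, None] * s), s, axis=1)

L = 4                                                   # finest level
n_samples = [4000 // (2 ** l) for l in range(L + 1)]    # more samples on cheaper levels

estimate = 0.0
for l, n in enumerate(n_samples):
    xi = rng.normal(0.0, 0.3, n)            # stochastic parameter, shared within a level pair
    if l == 0:
        estimate += output(0, xi).mean()
    else:
        # correction term E[s_l - s_{l-1}]: same random draws on both levels
        estimate += (output(l, xi) - output(l - 1, xi)).mean()

print("multilevel estimate of E[output]:", estimate)
```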
NASA Technical Reports Server (NTRS)
Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.
2005-01-01
This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge in order to achieve the requested drag tolerance. Although further adaptation was required to meet the requested tolerance, no further cycles were computed in order to avoid large discrepancies between the surface mesh spacing and the refined field spacing.
Computation of optimal output-feedback compensators for linear time-invariant systems
NASA Technical Reports Server (NTRS)
Platzman, L. K.
1972-01-01
The control of linear time-invariant systems with respect to a quadratic performance criterion was considered, subject to the constraint that the control vector be a constant linear transformation of the output vector. The optimal feedback matrix, f*, was selected to optimize the expected performance, given the covariance of the initial state. It is first shown that the expected performance criterion can be expressed as the ratio of two multinomials in the elements of f. This expression provides the basis for a feasible method of determining f* in the case of single-input single-output systems. A number of iterative algorithms are then proposed for the calculation of f* for multiple input-output systems. For two of these, monotone convergence is proved, but they involve the solution of nonlinear matrix equations at each iteration. Another is proposed involving the solution of Lyapunov equations at each iteration and the gradual increase of the magnitude of a penalty function. Experience with this algorithm will be needed to determine whether or not it does, indeed, possess desirable convergence properties, and whether it can be used to determine the globally optimal f*.
Multi-criterion model ensemble of CMIP5 surface air temperature over China
NASA Astrophysics Data System (ADS)
Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming
2018-05-01
The global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and therefore supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions. The reason is that GCMs' configurations, module characteristics, and dynamic forcings vary from one to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R) and uncertainty are commonly used statistics for evaluating the performances of GCMs. However, the simultaneous achievement of satisfactory values of all statistics cannot be guaranteed with many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the projections of SAT are analyzed with regard to three different statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to the different statistics. The comparison results over the historical period (1900-2005) show that the optimized solutions are superior to those obtained with a simple model average, as well as to any single GCM output. The improvements of the statistics vary for different climatic regions over China. Future projection (2006-2100) with the proposed ensemble method identifies that the largest (smallest) temperature changes will happen in South Central China (Inner Mongolia), North Eastern China (South Central China), and North Western China (South Central China), under the RCP 2.6, RCP 4.5, and RCP 8.5 scenarios, respectively.
Study on UKF based federal integrated navigation for high dynamic aviation
NASA Astrophysics Data System (ADS)
Zhao, Gang; Shao, Wei; Chen, Kai; Yan, Jie
2011-08-01
High dynamic aircraft, such as hypersonic vehicles, are an attractive new generation of vehicles that provide near-space aviation with a large flight envelope in both speed and altitude. The complex flight environment of high dynamic vehicles requires a navigation scheme with high accuracy and stability. Since the conventional federated integration of the Strapdown Inertial Navigation System (SINS) and the Global Positioning System (GPS) based on the Extended Kalman Filter (EKF) fails during GPS signal blackouts caused by high-speed flight, a new high-precision, high-stability integrated navigation approach is presented in this paper, in which SINS, GPS, and a Celestial Navigation System (CNS) are combined in a federated information-fusion configuration based on the nonlinear Unscented Kalman Filter (UKF) algorithm. First, the state error of the new integrated system is modeled. According to this error model, the SINS is used as the mathematical platform for the navigation solution. The SINS combined with GPS constitutes one UKF-based error estimation filter subsystem that produces a local optimal estimate, and the SINS combined with CNS constitutes another error estimation subsystem. A no-reset federated filter configuration based on partial information sharing is proposed to fuse the two local optimal estimates into a global optimal error estimate, which is then used to correct the SINS navigation solution. A χ² fault detection method is used to detect subsystem faults, and a faulty subsystem is isolated during the fault interval to protect the system from divergence. The integrated system takes advantage of SINS, GPS, and CNS to greatly improve accuracy and reliability for high dynamic navigation applications. Simulation results show that the federated fusion of GPS and CNS measurements to correct the SINS solution is reasonable and effective, with estimation performance that satisfies the demands of high dynamic flight navigation. The UKF-based integrated scheme is superior to the EKF-based scheme, with smaller estimation errors and a faster convergence rate.
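As a rough illustration of the no-reset fusion step, the sketch below combines two local error estimates by information weighting. The state vectors, covariances, and the assumption of uncorrelated local errors are illustrative simplifications, not the paper's SINS/GPS/CNS models or its information-sharing factors.

```python
import numpy as np

def fuse_federated(x1, P1, x2, P2):
    """Information-weighted fusion of two local estimates.

    Simplifying assumption: the local estimation errors are uncorrelated,
    so cross-covariances between the two subsystems are neglected.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    Pg = np.linalg.inv(I1 + I2)              # global error covariance
    xg = Pg @ (I1 @ x1 + I2 @ x2)            # global error estimate
    return xg, Pg

# toy 2-state example: SINS/GPS and SINS/CNS local error estimates (made up)
x_gps, P_gps = np.array([0.12, -0.03]), np.diag([0.04, 0.09])
x_cns, P_cns = np.array([0.10, 0.01]), np.diag([0.16, 0.02])
x_glob, P_glob = fuse_federated(x_gps, P_gps, x_cns, P_cns)
print(x_glob, np.diag(P_glob))
```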
Carbon monoxide measurement in the global atmospheric sampling program
NASA Technical Reports Server (NTRS)
Dudzinski, T. J.
1979-01-01
The carbon monoxide measurement system used in the NASA Global Atmospheric Sampling Program (GASP) is described. The system used a modified version of a commercially available infrared absorption analyzer. The modifications increased the sensitivity of the analyzer to 1 ppmv full scale, with a limit of detectability of 0.02 ppmv. Packaging was modified for automatic, unattended operation in an aircraft environment. The GASP system is described along with analyzer operation, calibration procedures, and measurement errors. Uncertainty of the CO measurement over a 2-year period ranged from ±3 to ±13 percent of reading, plus an error of ±3 to ±15 ppbv due to random fluctuation of the output signal.
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-08-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
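The statistical-optimization step itself can be sketched as the standard optimal linear combination of observed and background bending-angle profiles. The covariance construction below (outer products of uncertainty profiles with correlation matrices, here simply identity) only illustrates the idea; it is not the OPSv5.6 algorithm or the new dynamic scheme.

```python
import numpy as np

def optimize_bending_angle(alpha_obs, alpha_bg, sig_obs, sig_bg, C_obs, C_bg):
    """Statistically optimized profile from observed and background profiles."""
    O = np.outer(sig_obs, sig_obs) * C_obs       # observation error covariance
    B = np.outer(sig_bg, sig_bg) * C_bg          # background error covariance
    K = B @ np.linalg.inv(B + O)                 # optimal weighting matrix
    return alpha_bg + K @ (alpha_obs - alpha_bg)

# toy profile over a few impact heights (values are illustrative only)
n = 5
alpha_bg = np.linspace(1e-3, 1e-5, n)
alpha_obs = alpha_bg * (1 + 0.05 * np.random.default_rng(1).normal(size=n))
sig_bg, sig_obs = 0.1 * alpha_bg, 0.2 * alpha_bg
C = np.eye(n)                                    # identity correlations for the sketch
print(optimize_bending_angle(alpha_obs, alpha_bg, sig_obs, sig_bg, C, C))
```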
Tong, Shao Cheng; Li, Yong Ming; Zhang, Hua-Guang
2011-07-01
In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with immeasurable states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the immeasurable states. By combining the adaptive backstepping technique with decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of "explosion of complexity" inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches.
Qiao, Jie; Papa, J.; Liu, X.
2015-09-24
Monolithic large-scale diffraction gratings are desired to improve the performance of high-energy laser systems and scale them to higher energy, but the surface deformation of these diffraction gratings induces spatio-temporal coupling that is detrimental to the focusability and compressibility of the output pulse. A new deformable-grating-based pulse compressor architecture with optimized actuator positions has been designed to correct the spatial and temporal aberrations induced by grating wavefront errors. An integrated optical model has been built to analyze the effect of grating wavefront errors on the spatio-temporal performance of a compressor based on four deformable gratings. Moreover, a 1.5-meter deformable grating has been optimized using an integrated finite-element-analysis and genetic-optimization model, leading to spatio-temporal performance similar to the baseline design with ideal gratings.
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
Multivariable frequency domain identification via 2-norm minimization
NASA Technical Reports Server (NTRS)
Bayard, David S.
1992-01-01
The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
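A brute-force stand-in for the selection idea: score each candidate sensor subset by the trace of the steady-state Kalman error covariance and keep the best. The plant matrices, sensor noise variances, and exhaustive search below are illustrative assumptions; the paper's method additionally addresses the underdetermined case through its choice of tuning parameters.

```python
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

def steady_state_error(A, C, Q, R):
    """Trace of the steady-state a priori Kalman error covariance,
    obtained from the filtering Riccati equation via duality."""
    P = solve_discrete_are(A.T, C.T, Q, R)
    return float(np.trace(P))

def best_sensor_subset(A, C_full, Q, sensor_var, k):
    """Exhaustively score all k-sensor subsets (brute force, for illustration)."""
    best = None
    for idx in itertools.combinations(range(C_full.shape[0]), k):
        C = C_full[list(idx), :]
        R = np.diag(sensor_var[list(idx)])
        err = steady_state_error(A, C, Q, R)
        if best is None or err < best[1]:
            best = (idx, err)
    return best

# toy 3-state plant with 4 candidate sensors (all values invented)
A = np.array([[0.9, 0.1, 0.0], [0.0, 0.85, 0.1], [0.0, 0.0, 0.8]])
Q = 0.01 * np.eye(3)
C_full = np.vstack([np.eye(3), [1.0, 1.0, 0.0]])
sensor_var = np.array([0.02, 0.05, 0.04, 0.03])
print(best_sensor_subset(A, C_full, Q, sensor_var, k=2))
```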
Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods
2016-11-16
The approach proceeds from the determinant of the inverse Fisher information matrix, which is proportional to the global error volume. Fisher information criteria are well suited to the design of statistical estimators (i.e., sensors), since their respective inverses act as lower bounds to the (co)variances of the subject estimator.
NASA Technical Reports Server (NTRS)
Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)
2003-01-01
A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.
Choosing Sensor Configuration for a Flexible Structure Using Full Control Synthesis
NASA Technical Reports Server (NTRS)
Lind, Rick; Nalbantoglu, Volkan; Balas, Gary
1997-01-01
Optimal locations and types for feedback sensors which meet design constraints and control requirements are difficult to determine. This paper introduces an approach to choosing a sensor configuration based on Full Control synthesis. A globally optimal Full Control compensator is computed for each member of a set of sensor configurations which are feasible for the plant. The sensor configuration associated with the Full Control system achieving the best closed-loop performance is chosen for feedback measurements to an output feedback controller. A flexible structure is used as an example to demonstrate this procedure. Experimental results show sensor configurations chosen to optimize the Full Control performance are effective for output feedback controllers.
Adaptive control system for pulsed megawatt klystrons
Bolie, Victor W.
1992-01-01
The invention provides an arrangement for reducing waveform errors such as errors in phase or amplitude in output pulses produced by pulsed power output devices such as klystrons by generating an error voltage representing the extent of error still present in the trailing edge of the previous output pulse, using the error voltage to provide a stored control voltage, and applying the stored control voltage to the pulsed power output device to limit the extent of error in the leading edge of the next output pulse.
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
To balance the performance of time-domain least squares (LS) channel estimation against its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is derived. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the proposed method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and is nearly optimal.
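A toy per-subcarrier sketch of the LS estimate and an LMMSE refinement, the two estimator flavors compared in the abstract. The pilot design, SNR, and identity channel-correlation matrix are assumptions made for illustration only; the paper's MIMO-to-SISO decomposition is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                     # subcarriers
snr_lin = 10 ** (15 / 10)                  # 15 dB pilot SNR (assumed)
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # true channel
x = np.ones(N)                             # known unit-power pilot symbols
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2 * snr_lin)
y = h * x + noise                          # received pilot observations

h_ls = y / x                               # per-subcarrier least-squares estimate

# LMMSE refinement of the LS estimate; the channel correlation matrix is taken
# as identity (uncorrelated subcarriers) purely for illustration
R_h = np.eye(N)
W = R_h @ np.linalg.inv(R_h + (1 / snr_lin) * np.eye(N))
h_lmmse = W @ h_ls

print("LS MSE   :", np.mean(np.abs(h_ls - h) ** 2))
print("LMMSE MSE:", np.mean(np.abs(h_lmmse - h) ** 2))
```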
An Elimination Method of Temperature-Induced Linear Birefringence in a Stray Current Sensor
Xu, Shaoyi; Li, Wei; Xing, Fangfang; Wang, Yuqiao; Wang, Ruilin; Wang, Xianghui
2017-01-01
In this work, an elimination method for the temperature-induced linear birefringence (TILB) in a stray current sensor is proposed using a cylindrical spiral fiber (CSF), which produces a large amount of circular birefringence to eliminate the TILB based on the geometric rotation effect. First, the differential equations that describe the polarization evolution of the CSF element are derived, and the output error model is built based on the Jones matrix calculus. Then, an accurate search method is proposed to obtain the key parameters of the CSF, including the length of the cylindrical silica rod and the number of curve spirals; the optimized results are 302 mm and 11, respectively. Moreover, an effective factor is proposed to analyze the elimination of the TILB, which should be greater than 7.42 to achieve the output error requirement of no more than 0.5%. Finally, temperature experiments are conducted to verify the feasibility of the elimination method. The results indicate that the output error caused by the TILB can be controlled to less than 0.43% with this elimination method over the range from −20 °C to 40 °C.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features.
Digital transceiver design for two-way AF-MIMO relay systems with imperfect CSI
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Chou, Yu-Fei; Chen, Kui-He
2013-09-01
In this paper, combined optimization of the terminal precoders/equalizers and the single-relay precoder is proposed for an amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way single-relay system with correlated channel uncertainties. Both the terminal transceivers and the relay precoding matrix are designed based on the minimum mean square error (MMSE) criterion when the terminals are unable to completely cancel self-interference due to imperfect correlated channel state information (CSI). This robust joint optimization problem of beamforming and precoding matrices under power constraints is neither concave nor convex, so a nonlinear matrix-form conjugate gradient (MCG) algorithm is applied to explore locally optimal solutions. Simulation results show that the robust transceiver design is able to effectively overcome the bit-error-rate (BER) loss due to the inclusion of correlated channel uncertainties and residual self-interference.
NASA Technical Reports Server (NTRS)
Didlake, Anthony C., Jr.; Heymsfield, Gerald M.; Tian, Lin; Guimond, Stephen R.
2015-01-01
The coplane analysis technique for mapping the three-dimensional wind field of precipitating systems is applied to the NASA High Altitude Wind and Rain Airborne Profiler (HIWRAP). HIWRAP is a dual-frequency Doppler radar system with two downward pointing and conically scanning beams. The coplane technique interpolates radar measurements to a natural coordinate frame, directly solves for two wind components, and integrates the mass continuity equation to retrieve the unobserved third wind component. This technique is tested using a model simulation of a hurricane and compared to a global optimization retrieval. The coplane method produced lower errors for the cross-track and vertical wind components, while the global optimization method produced lower errors for the along-track wind component. Cross-track and vertical wind errors were dependent upon the accuracy of the estimated boundary condition winds near the surface and at nadir, which were derived by making certain assumptions about the vertical velocity field. The coplane technique was then applied successfully to HIWRAP observations of Hurricane Ingrid (2013). Unlike the global optimization method, the coplane analysis allows for a transparent connection between the radar observations and specific analysis results. With this ability, small-scale features can be analyzed more adequately and erroneous radar measurements can be identified more easily.
NASA Astrophysics Data System (ADS)
Toker, C.; Gokdag, Y. E.; Arikan, F.; Arikan, O.
2012-04-01
The ionosphere is a very important part of space weather. Modeling and monitoring of ionospheric variability is a major part of satellite communication, navigation, and positioning systems. Total Electron Content (TEC), which is defined as the line integral of the electron density along a ray path, is one of the parameters used to investigate ionospheric variability. Dual-frequency GPS receivers, with their worldwide availability and efficiency in TEC estimation, have become a major source of global and regional TEC modeling. When Global Ionospheric Maps (GIM) of International GPS Service (IGS) centers (http://iono.jpl.nasa.gov/gim.html) are investigated, it can be observed that the regional ionosphere along the midlatitude regions can be modeled as a constant, linear, or quadratic surface. Globally, especially around the magnetic equator, the TEC surfaces resemble twisted and dispersed single-centered or double-centered Gaussian functions. Particle Swarm Optimization (PSO) has proved itself to be a fast-converging and effective optimization tool in diverse fields. Yet, in order to apply this optimization technique to TEC modeling, the method has to be modified for higher efficiency and accuracy in the extraction of geophysical parameters such as the model parameters of TEC surfaces. In this study, a modified PSO (mPSO) method is applied to regional and global synthetic TEC surfaces. The synthetic surfaces that represent the trend and small-scale variability of various ionospheric states are necessary to compare the performance of mPSO over the number of iterations, accuracy in parameter estimation, and overall surface reconstruction. The Cramer-Rao bounds for each surface type and model are also investigated, and the performance of mPSO is tested with respect to these bounds. For global models, the sample points used in the optimization are obtained using the IGS receiver network. For regional TEC models, regional networks such as the Turkish National Permanent GPS Network (TNPGN-Active) receiver sites are used. The regional TEC models are grouped into constant (one parameter), linear (two parameters), and quadratic (six parameters) surfaces which are functions of latitude and longitude. Global models require seven parameters for a single-centered Gaussian and 13 parameters for a double-centered Gaussian function. The error criterion is the normalized percentage error for both the surface and the parameters. It is observed that mPSO is very successful in parameter extraction for various regional and global models. The normalized reconstruction error varies from 10^-4 for constant surfaces to 10^-3 for quadratic surfaces in regional models, sampled with regional networks. Even for the cases of a severe geomagnetic storm that affects measurements globally, with the IGS network, the reconstruction error is on the order of 10^-1 even though individual parameters have higher normalized errors. The modified PSO technique proved itself to be a useful tool for parameter extraction of more complicated TEC models. This study is supported by TUBITAK EEEAG under Grant No: 109E055.
Xian, Zhiwen; Hu, Xiaoping; Lian, Junxiang; Zhang, Lilian; Cao, Juliang; Wang, Yujie; Ma, Tao
2014-01-01
Navigation plays a vital role in our daily life. As traditional and commonly used navigation technologies, the Inertial Navigation System (INS) and the Global Navigation Satellite System (GNSS) can provide accurate location information, but they suffer from the accumulative error of inertial sensors and cannot be used in satellite-denied environments. The remarkable navigation ability of animals shows that the pattern of the polarization sky can be used for navigation. A bio-inspired POLarization Navigation Sensor (POLNS) is constructed to detect the polarization of skylight. Contrary to the previous approach, we utilize all the outputs of the POLNS to compute the input polarization angle, based on Least Squares, which provides optimal angle estimation. In addition, a new sensor calibration algorithm is presented, in which the installation angle errors and sensor biases are taken into consideration. Derivation and implementation of our calibration algorithm are discussed in detail. To evaluate the performance of our algorithms, simulation and real-data tests are carried out to compare our algorithms with several existing algorithms. The comparison results indicate that our algorithms are superior to the others and are more feasible and effective in practice.
Practical synchronization on complex dynamical networks via optimal pinning control
NASA Astrophysics Data System (ADS)
Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu
2015-07-01
We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize the global synchronization error and control strength over a given finite time interval, as well as the synchronization error at the terminal time. By utilizing Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimal system that achieves the control goal. The result is verified by performing numerical simulations on Star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control and traditional pinning control, we propose an optimal pinning control strategy that depends on the network's topological structure. The obtained results show that optimal pinning control is very effective for synchronization control in real applications.
Global optimization of multimode interference structure for ratiometric wavelength measurement
NASA Astrophysics Data System (ADS)
Wang, Qian; Farrell, Gerald; Hatta, Agus Muhamad
2007-07-01
The multimode interference structure is conventionally used as a splitter/combiner. In this paper, it is optimized as an edge filter for ratiometric wavelength measurement, which can be used in the demodulation of fiber Bragg grating sensors. The global optimization algorithm adaptive simulated annealing is introduced into the design of the multimode interference structure, including the length and width of the multimode waveguide section and the positions of the input and output waveguides. The designed structure shows a suitable spectral response for wavelength measurement and a good fabrication tolerance.
NASA Astrophysics Data System (ADS)
Nugraha, A. T.; Agustinah, T.
2018-01-01
The quadcopter is an unstable, underactuated, and nonlinear system, and its control has become an important focus of research. In this study, a path-following control method for position on the X and Y axes using the Command Generator Tracker (CGT) structure is tested. Quadcopter attitude and position control using optimal output feedback are compared. H∞ performance is added to the optimal output feedback control to maintain the stability and robustness of the quadcopter. An iterative numerical Linear Matrix Inequality (LMI) technique is used to find the controller gain. The path-following control problem is solved using the LQ regulator method with output feedback. Simulations show that the control system can follow paths defined by a square-shaped reference signal. The simulation results suggest that the proposed method brings the yaw angle to the expected value and that the quadcopter can automatically follow the path with mean cross-track errors of X = 0.5 m and Y = 0.2 m.
NASA Astrophysics Data System (ADS)
Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin
2017-09-01
This paper presents the black-box modelling of a palm oil biodiesel engine (POB) using the multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the algorithm for optimization: minimizing the number of terms in the model structure and minimizing the mean square error between the actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied to validate the candidate models obtained from the MOODE algorithm and to select an optimal model.
Economic optimization of operations for hybrid energy systems under variable markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jen; Garcia, Humberto E.
2016-05-21
We propose a hybrid energy system (HES), an important element in enabling increasing penetration of clean energy. This paper investigates the operations flexibility of HES and develops a methodology for operations optimization that maximizes economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. To compensate for prediction error, a control strategy is designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results for two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. The economic advantages of this operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.
Perceptual Color Characterization of Cameras
Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo
2014-01-01
Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatially based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error, and 13% for the CID error measure.
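The conventional baseline the paper improves on, a least-squares 3 × 3 characterization matrix, can be sketched as follows. The patch data and ground-truth matrix are synthetic, and the perceptual (ΔE, S-CIELAB, CID) optimization via spherical sampling proposed in the paper is not reproduced here.

```python
import numpy as np

def fit_color_matrix(rgb, xyz):
    """Least-squares 3x3 matrix M such that xyz ≈ rgb @ M.T."""
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M.T

# toy data: 24 patches generated from an invented ground-truth mapping plus noise
rng = np.random.default_rng(2)
true_M = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.uniform(0, 1, size=(24, 3))
xyz = rgb @ true_M.T + 0.005 * rng.normal(size=(24, 3))
print(np.round(fit_color_matrix(rgb, xyz), 3))
```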
Radial basis function network learns ceramic processing and predicts related strength and density
NASA Technical Reports Server (NTRS)
Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex; Tjia, Robert E.
1993-01-01
Radial basis function (RBF) neural networks were trained using the data from 273 Si3N4 modulus of rupture (MOR) bars which were tested at room temperature and 135 MOR bars which were tested at 1370 C. Milling time, sintering time, and sintering gas pressure were the processing parameters used as the input features. Flexural strength and density were the outputs by which the RBF networks were assessed. The 'nodes-at-data-points' method was used to set the hidden layer centers and output layer training used the gradient descent method. The RBF network predicted strength with an average error of less than 12 percent and density with an average error of less than 2 percent. Further, the RBF network demonstrated a potential for optimizing and accelerating the development and processing of ceramic materials.
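A compact sketch of the 'nodes-at-data-points' construction: Gaussian basis functions centered on the training samples, with output-layer weights fitted by gradient descent. The processing data below are synthetic stand-ins for the Si3N4 measurements, and the basis width and learning rate are arbitrary assumptions.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    # Gaussian basis functions centered at the training points
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# synthetic stand-in for (milling time, sintering time, gas pressure) -> (strength, density)
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(60, 3))
Y = np.column_stack([X @ [0.5, 0.3, 0.2] + 0.05 * rng.normal(size=60),
                     X @ [0.2, 0.6, 0.1] + 0.02 * rng.normal(size=60)])

centers, width = X.copy(), 0.3            # "nodes at data points"
Phi = rbf_design_matrix(X, centers, width)

# output-layer weights trained by gradient descent on the squared error
W = np.zeros((Phi.shape[1], Y.shape[1]))
lr = 0.01
for _ in range(2000):
    W -= lr * Phi.T @ (Phi @ W - Y) / len(X)

pred = rbf_design_matrix(X[:5], centers, width) @ W
print(np.round(pred, 3))
print(np.round(Y[:5], 3))
```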
TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, E; Phillips, M; Bojechko, C
Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3 mm gamma criterion. To simulate errors, modifications were made to treatment plans using four possible classes of error: 1) patient misalignment, 2) changes in patient body habitus, 3) machine output changes and 4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of the error, the area under a ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, changes in the machine output and patient habitus, the AUC varied from 0.78–0.97 scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84–92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52–0.74). Some errors with weak detectability had large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry in detecting variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. There was no correlation found between the detectability of the error using the gamma pass rate, ROC analysis and the impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
Assessment of Surface Air Temperature over China Using Multi-criterion Model Ensemble Framework
NASA Astrophysics Data System (ADS)
Li, J.; Zhu, Q.; Su, L.; He, X.; Zhang, X.
2017-12-01
The General Circulation Models (GCMs) are designed to simulate the present climate and project future trends. It has been noticed that the performances of GCMs are not always in agreement with each other over different regions. Model ensemble techniques have been developed to post-process the GCMs' outputs and improve their prediction reliabilities. To evaluate the performances of GCMs, root-mean-square error, correlation coefficient, and uncertainty are commonly used statistical measures. However, the simultaneous achievements of these satisfactory statistics cannot be guaranteed when using many model ensemble techniques. Meanwhile, uncertainties and future scenarios are critical for Water-Energy management and operation. In this study, a new multi-model ensemble framework was proposed. It uses a state-of-art evolutionary multi-objective optimization algorithm, termed Multi-Objective Complex Evolution Global Optimization with Principle Component Analysis and Crowding Distance (MOSPD), to derive optimal GCM ensembles and demonstrate the trade-offs among various solutions. Such trade-off information was further analyzed with a robust Pareto front with respect to different statistical measures. A case study was conducted to optimize the surface air temperature (SAT) ensemble solutions over seven geographical regions of China for the historical period (1900-2005) and future projection (2006-2100). The results showed that the ensemble solutions derived with MOSPD algorithm are superior over the simple model average and any single model output during the historical simulation period. For the future prediction, the proposed ensemble framework identified that the largest SAT change would occur in the South Central China under RCP 2.6 scenario, North Eastern China under RCP 4.5 scenario, and North Western China under RCP 8.5 scenario, while the smallest SAT change would occur in the Inner Mongolia under RCP 2.6 scenario, South Central China under RCP 4.5 scenario, and South Central China under RCP 8.5 scenario.
A new method to estimate average hourly global solar radiation on the horizontal surface
NASA Astrophysics Data System (ADS)
Pandey, Pramod K.; Soupir, Michelle L.
2012-10-01
A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on the horizontal surface (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), latitude, and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were then applied in the model for predicting average hourly global solar radiation at four different locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivities of the parameters to the predictions were estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly predictions, error percentages (i.e., MABE and RMSE) were less than 20%. The approach proposed here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface at different locations, using readily available data (i.e., latitude and longitude of the location) as inputs.
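The evaluation statistics named above are straightforward to compute; a small sketch with made-up hourly values (not the paper's data) follows.

```python
import numpy as np

def evaluation_stats(pred, obs):
    """Correlation coefficient (r), mean absolute bias error (MABE),
    root-mean-square error (RMSE), and coefficient of determination (R^2)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    r = np.corrcoef(pred, obs)[0, 1]
    mabe = np.mean(np.abs(pred - obs))
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    r2 = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return dict(r=r, MABE=mabe, RMSE=rmse, R2=r2)

# illustrative hourly global radiation values (W m^-2), not measured data
obs  = [0, 120, 340, 520, 610, 580, 430, 250, 60, 0]
pred = [0, 110, 355, 500, 630, 560, 445, 230, 70, 0]
print(evaluation_stats(pred, obs))
```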
Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying
2017-08-01
Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach.
Yang, Yongliang; Modares, Hamidreza; Wunsch, Donald C; Yin, Yixin
2018-06-01
This paper develops optimal control protocols for the distributed output synchronization problem of leader-follower multiagent systems with an active leader. Agents are assumed to be heterogeneous with different dynamics and dimensions. The desired trajectory is assumed to be preplanned and is generated by the leader. Other follower agents autonomously synchronize to the leader by interacting with each other using a communication network. The leader is assumed to be active in the sense that it has a nonzero control input so that it can act independently and update its control to keep the followers away from possible danger. A distributed observer is first designed to estimate the leader's state and generate the reference signal for each follower. Then, the output synchronization of leader-follower systems with an active leader is formulated as a distributed optimal tracking problem, and inhomogeneous algebraic Riccati equations (AREs) are derived to solve it. The resulting distributed optimal control protocols not only minimize the steady-state error but also optimize the transient response of the agents. An off-policy reinforcement learning algorithm is developed to solve the inhomogeneous AREs online in real time and without requiring any knowledge of the agents' dynamics. Finally, two simulation examples are conducted to illustrate the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2011-12-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
Progress in navigation filter estimate fusion and its application to spacecraft rendezvous
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1994-01-01
A new derivation of an algorithm which fuses the outputs of two Kalman filters is presented within the context of previous research in this field. Unlike other works, this derivation clearly shows the combination of estimates to be optimal, minimizing the trace of the fused covariance matrix. The algorithm assumes that the filters use identical models, and are stable and operating optimally with respect to their own local measurements. Evidence is presented which indicates that the error ellipsoid derived from the covariance of the optimally fused estimate is contained within the intersections of the error ellipsoids of the two filters being fused. Modifications which reduce the algorithm's data transmission requirements are also presented, including a scalar gain approximation, a cross-covariance update formula which employs only the two contributing filters' autocovariances, and a form of the algorithm which can be used to reinitialize the two Kalman filters. A sufficient condition for using the optimally fused estimates to periodically reinitialize the Kalman filters in this fashion is presented and proved as a theorem. When these results are applied to an optimal spacecraft rendezvous problem, simulated performance results indicate that the use of optimally fused data leads to significantly improved robustness to initial target vehicle state errors. The following applications of estimate fusion methods to spacecraft rendezvous are also described: state vector differencing, and redundancy management.
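The fusion step itself, in its standard trace-minimizing (Bar-Shalom/Campo-style) form, can be sketched as below. The cross-covariance is set to zero for simplicity, and the scalar-gain approximation and reinitialization logic described in the abstract are not reproduced.

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2, P12=None):
    """Trace-minimizing linear fusion of two estimates of the same state."""
    if P12 is None:
        P12 = np.zeros_like(P1)              # assume uncorrelated local errors
    S = P1 + P2 - P12 - P12.T
    K = (P1 - P12) @ np.linalg.inv(S)
    xf = x1 + K @ (x2 - x1)                  # fused estimate
    Pf = P1 - K @ (P1 - P12.T)               # fused covariance
    return xf, Pf

# toy 2-state example with invented numbers
x1, P1 = np.array([1.02, 0.48]), np.diag([0.10, 0.20])
x2, P2 = np.array([0.95, 0.52]), np.diag([0.25, 0.05])
xf, Pf = fuse_estimates(x1, P1, x2, P2)
print(xf, np.trace(Pf), np.trace(P1), np.trace(P2))
```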
Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam Ali
2006-01-01
A knowledge of the appropriate values of the parameters of a genetic algorithm (GA) such as the population size, the shrunk search space containing the solution, crossover and mutation probabilities is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such a preprocessing is not only fast but also enables us to get the global optimal solution and its reasonably narrow error bounds with a high degree of confidence.
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.
Global Optimal Trajectory in Chaos and NP-Hardness
NASA Astrophysics Data System (ADS)
Latorre, Vittorio; Gao, David Yang
This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
Sensitivity, optimal scaling and minimum roundoff errors in flexible structure models
NASA Technical Reports Server (NTRS)
Skelton, Robert E.
1987-01-01
Traditional modeling notions presume the existence of a truth model that relates the input to the output, without advance knowledge of the input. This has led to the evolution of education and research approaches (including the available control and robustness theories) that treat modeling and control design as separate problems. The paper explores the subtleties of this presumption that the modeling and control problems are separable. A detailed study of the nature of modeling errors is useful to gain insight into the limitations of traditional control and identification points of view. Modeling errors need not be small but simply appropriate for control design. Furthermore, the modeling and control design processes are inevitably iterative in nature.
Moving-window dynamic optimization: design of stimulation profiles for walking.
Dosen, Strahinja; Popović, Dejan B
2009-05-01
The overall goal of the research is to improve control for electrical stimulation-based assistance of walking in hemiplegic individuals. We present a simulation for generating an offline input (sensors)-output (intensity of muscle stimulation) representation of walking that serves in synthesizing a rule base for control of electrical stimulation for restoration of walking. The simulation uses a new algorithm termed moving-window dynamic optimization (MWDO). The optimization criterion was to minimize the sum of the squares of tracking errors from desired trajectories, with a penalty function on the total muscle effort. The MWDO was developed in the MATLAB environment and tested using target trajectories characteristic of slow-to-normal walking recorded in a healthy individual and a model with parameters characterizing a potential hemiplegic user. The outputs of the simulation are piecewise constant intensities of electrical stimulation and the trajectories generated when the calculated stimulation is applied to the model. We demonstrated the importance of this simulation by showing the outputs for healthy and hemiplegic individuals, using the same target trajectories. Results of the simulation show that the MWDO is an efficient tool for analyzing achievable trajectories and for determining the stimulation profiles that need to be delivered for good tracking.
Inference of Stochastic Nonlinear Oscillators with Applications to Physiological Problems
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Luchinsky, Dmitry G.
2004-01-01
A new method for inference of coupled stochastic nonlinear oscillators is described. The technique does not require extensive global optimization, provides optimal compensation for noise-induced errors and is robust over a broad range of dynamical models. We illustrate the main ideas of the technique by inferring a model of five globally and locally coupled noisy oscillators. Specific modifications of the technique for inferring hidden degrees of freedom of coupled nonlinear oscillators are discussed in the context of physiological applications.
2008-10-01
[Fragment: definitions of DGPS pseudorange terms (geometric distance, receiver clock offset) and a description of over-sea flight legs flown to assess DGPS altitude data quality; only figure residue (latitude error and horizontal track plots) remains of the accompanying material.]
Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Boucher, Matthew J.
2017-01-01
Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.
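To make the frequency-domain output-error idea concrete, the sketch below fits a simple second-order transfer function to input/output data at the excited multisine frequencies by minimizing the complex output error. The model structure, parameter names, and frequency selection are illustrative assumptions, not the X-56A aeroelastic model.

```python
import numpy as np
from scipy.optimize import least_squares

def frequency_domain_output_error(u, y, dt, freqs_hz, theta0):
    """Fit G(s) = (b1*s + b0) / (s**2 + a1*s + a0) to measured input u and output y
    at the multisine frequencies freqs_hz, starting from parameter guess theta0."""
    n = len(u)
    f = np.fft.rfftfreq(n, dt)
    U, Y = np.fft.rfft(u), np.fft.rfft(y)
    idx = [np.argmin(np.abs(f - fk)) for fk in freqs_hz]   # keep excited frequencies only
    w = 2.0 * np.pi * f[idx]
    G_meas = Y[idx] / U[idx]                               # measured frequency response

    def residuals(theta):
        b1, b0, a1, a0 = theta
        s = 1j * w
        G = (b1 * s + b0) / (s**2 + a1 * s + a0)
        e = G - G_meas                                     # complex output error
        return np.concatenate([e.real, e.imag])

    return least_squares(residuals, theta0).x
```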
Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang
2018-05-08
When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experiment data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
Developing an A Priori Database for Passive Microwave Snow Water Retrievals Over Ocean
NASA Astrophysics Data System (ADS)
Yin, Mengtao; Liu, Guosheng
2017-12-01
A physically optimized a priori database is developed for Global Precipitation Measurement Microwave Imager (GMI) snow water retrievals over ocean. The initial snow water content profiles are derived from CloudSat Cloud Profiling Radar (CPR) measurements. A radiative transfer model in which the single-scattering properties of nonspherical snowflakes are based on discrete dipole approximation results is employed to simulate brightness temperatures and their gradients. Snow water content profiles are then optimized through a one-dimensional variational (1D-Var) method. The standard deviations of the difference between observed and simulated brightness temperatures are of a similar magnitude to the observation errors defined for the observation error covariance matrix after the 1D-Var optimization, indicating that this variational method is successful. This optimized database is applied in a Bayesian snow water retrieval algorithm. The retrieval results indicate that the 1D-Var approach has a positive impact on the GMI-retrieved snow water content profiles by improving the physical consistency between snow water content profiles and observed brightness temperatures. The global distribution of snow water contents retrieved from the a priori database is compared with CloudSat CPR estimates. Results show that the two estimates have a similar pattern of global distribution, and the difference between their global means is small. In addition, we investigate the impact of using physical parameters to subset the database on snow water retrievals. It is shown that using total precipitable water to subset the database with 1D-Var optimization is beneficial for snow water retrievals.
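The 1D-Var optimization amounts to minimizing a two-term cost function, as in the minimal sketch below; the forward operator mapping a snow water content profile to brightness temperatures is left as a placeholder argument, and the covariance matrices are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def one_d_var(xb, B, y, R, forward):
    """Minimize J(x) = 1/2 (x-xb)^T B^-1 (x-xb) + 1/2 (y-F(x))^T R^-1 (y-F(x)),
    where F is the radiative transfer (forward) operator and xb the CPR-derived prior."""
    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

    def cost(x):
        dx, dy = x - xb, y - forward(x)
        return 0.5 * dx @ Binv @ dx + 0.5 * dy @ Rinv @ dy

    return minimize(cost, xb, method="L-BFGS-B").x
```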
NASA Technical Reports Server (NTRS)
Broussard, John R.
1987-01-01
Relationships between observers, Kalman filters and dynamic compensators using feedforward control theory are investigated. In particular, the relationship, if any, between the dynamic compensator state and linear functions of a discrete plant state is investigated. It is shown that, in steady state, a dynamic compensator driven by the plant output can be expressed as the sum of two terms. The first term is a linear combination of the plant state. The second term depends on plant and measurement noise, and on the plant control. Thus, the state of the dynamic compensator can be expressed as an estimator of the first term with additive error given by the second term. Conditions under which a dynamic compensator is a Kalman filter are presented, and reduced-order optimal estimators are investigated.
Dichroic beamsplitter for high energy laser diagnostics
LaFortune, Kai N [Livermore, CA; Hurd, Randall [Tracy, CA; Fochs, Scott N [Livermore, CA; Rotter, Mark D [San Ramon, CA; Hackel, Lloyd [Livermore, CA
2011-08-30
Wavefront control techniques are provided for the alignment and performance optimization of optical devices. A Shack-Hartmann wavefront sensor can be used to measure the wavefront distortion, and a control system generates a feedback error signal to optics inside the device to correct the wavefront. The system can be calibrated with a low-average-power probe laser. An optical element is provided to couple the optical device to a diagnostic/control package in a way that optimizes both the output power of the optical device and the coupling of the probe light into the diagnostics.
Self-optimization and auto-stabilization of receiver in DPSK transmission system.
Jang, Y S
2008-03-17
We propose a self-optimization and auto-stabilization method for a 1-bit DMZI in DPSK transmission. Using the characteristics of eye patterns, the optical frequency transmittance of a 1-bit DMZI is thermally controlled to maximize the power difference between the constructive and destructive output ports. Unlike other techniques, this control method can be realized without additional components, making it simple and cost effective. Experimental results show that error-free performance is maintained when the carrier optical frequency variation is approximately 10% of the data rate.
Model reference tracking control of an aircraft: a robust adaptive approach
NASA Astrophysics Data System (ADS)
Tanyer, Ilker; Tatlicioglu, Enver; Zergeroglu, Erkan
2017-05-01
This work presents the design and the corresponding analysis of a nonlinear robust adaptive controller for model reference tracking of an aircraft that has parametric uncertainties in its system matrices and additive state- and/or time-dependent nonlinear disturbance-like terms in its dynamics. Specifically, a robust integral of the sign of the error feedback term and an adaptive term are fused with a proportional integral controller. Lyapunov-based stability analysis techniques are utilised to prove global asymptotic convergence of the output tracking error. Extensive numerical simulations are presented to illustrate the performance of the proposed robust adaptive controller.
NASA Astrophysics Data System (ADS)
Li, Xiaojing; Tang, Youmin; Yao, Zhixiong
2017-04-01
The predictability of the convection related to the Madden-Julian Oscillation (MJO) is studied using the coupled Community Earth System Model (CESM) and the climatically relevant singular vector (CSV) approach. The CSV approach is an ensemble-based strategy to calculate the optimal initial error on climate scales. In this study, we focus on the optimal initial error of the sea surface temperature in the Indian Ocean (IO), where the MJO onset occurs. Six MJO events are chosen from 10 years of model simulation output. The results show that the large values of the SVs are mainly located in the Bay of Bengal and the south-central IO (around 25°S, 90°E), forming a meridional dipole-like pattern. The fast error growth of the CSVs has important impacts on the prediction of the convection related to the MJO. Initial perturbations with the SV pattern result in the deep convection damping more quickly in the east Pacific Ocean. Moreover, sensitivity studies of the CSVs show that different initial fields do not noticeably affect the CSVs, while the perturbation domain has a stronger influence on them. The rapid growth of the CSVs is found to be related to the western Bay of Bengal, where the wind stress starts to be perturbed due to the CSV initial error. These results contribute to the establishment of an ensemble prediction system as well as an optimal observation network. In addition, the analysis of the error growth can provide some insight into the relationship between SST and the intraseasonal convection related to the MJO.
Locatelli, R.; Bousquet, P.; Chevallier, F.; ...
2013-10-08
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. Here in our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly question the consistency of transport model errors in current inverse systems.
Numerical optimization in Hilbert space using inexact function and gradient evaluations
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
Trust region algorithms provide a robust iterative technique for solving nonconvex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: relative errors in the gradient are permitted, and the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
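A minimal trust-region loop of the kind this theory covers is sketched below. For brevity it uses a first-order model and a steepest-descent step; the inexactness of f and grad (for example, from an approximate PDE solve) enters only through the values they return. The Rosenbrock example with finite-difference gradients is purely illustrative.

```python
import numpy as np

def trust_region_inexact(f, grad, x0, delta0=1.0, delta_max=10.0, eta=0.1,
                         tol=1e-8, max_iter=500):
    """Basic trust-region iteration with the standard ratio test; f and grad may be inexact."""
    x, delta = np.asarray(x0, float), float(delta0)
    fx = f(x)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        p = -(delta / gnorm) * g            # step to the trust-region boundary
        pred = delta * gnorm                # predicted reduction of the linear model
        f_new = f(x + p)
        rho = (fx - f_new) / pred           # agreement between model and function
        if rho < 0.25:
            delta *= 0.25                   # shrink the region: model not trusted
        elif rho > 0.75:
            delta = min(2.0 * delta, delta_max)
        if rho > eta:                       # accept the step
            x, fx = x + p, f_new
    return x

# Example: minimize the Rosenbrock function with finite-difference (hence inexact) gradients.
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
fd_grad = lambda z, h=1e-6: np.array([(rosen(z + h*e) - rosen(z - h*e)) / (2*h)
                                      for e in np.eye(2)])
x_est = trust_region_inexact(rosen, fd_grad, [-1.2, 1.0])
```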
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
On Inertial Body Tracking in the Presence of Model Calibration Errors
Miezal, Markus; Taetz, Bertram; Bleser, Gabriele
2016-01-01
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266
Three-dimensional modeling of the cochlea by use of an arc fitting approach.
Schurzig, Daniel; Lexow, G Jakob; Majdani, Omid; Lenarz, Thomas; Rau, Thomas S
2016-12-01
A cochlea modeling approach is presented allowing for a user-defined degree of geometry simplification which automatically adjusts to the patient-specific anatomy. Model generation can be performed in a straightforward manner due to error estimation prior to the actual generation, thus minimizing modeling time. Therefore, the presented technique is well suited for a wide range of applications including finite element analyses, where geometrical simplifications are often inevitable. The method is presented for n=5 cochleae which were segmented using custom software for increased accuracy. The linear basilar membrane cross sections are expanded to areas while the scalae contours are reconstructed by a predefined number of arc segments. Prior to model generation, geometrical errors are evaluated locally for each cross section as well as globally for the resulting models and their basal turn profiles. The final combination of all reconditioned features into a 3D volume is performed in Autodesk Inventor using the loft feature. Due to the volume generation based on cubic splines, low errors could be achieved even for low numbers of arc segments and provided cross sections, both of which correspond to a strong degree of model simplification. Model generation could be performed in a time-efficient manner. The proposed simplification method was proven to be well suited for the helical cochlea geometry. The generated output data can be imported into commercial software tools for various analyses, representing a time-efficient way to create cochlea models optimally suited for the desired task.
NASA Astrophysics Data System (ADS)
Peng, Qi; Guan, Weipeng; Wu, Yuxiang; Cai, Ye; Xie, Canyu; Wang, Pengfei
2018-01-01
This paper proposes a three-dimensional (3-D) high-precision indoor positioning strategy using Tabu search based on visible light communication. Tabu search is a powerful global optimization algorithm, and the 3-D indoor positioning problem can be transformed into an optimal solution problem. Therefore, in 3-D indoor positioning, the optimal receiver coordinate can be obtained by the Tabu search algorithm. To the best of our knowledge, this is the first time the Tabu search algorithm has been applied to visible light positioning. Each light-emitting diode (LED) in the system broadcasts a unique identity (ID) and transmits the ID information. When the receiver detects optical signals with ID information from different LEDs, using the global optimization of the Tabu search algorithm, 3-D high-precision indoor positioning can be realized when the fitness value meets certain conditions. Simulation results show that the average positioning error is 0.79 cm, and the maximum error is 5.88 cm. An extended trajectory-tracking experiment also shows that 95.05% of positioning errors are below 1.428 cm. It can be concluded from these data that 3-D indoor positioning based on the Tabu search algorithm achieves the requirements of centimeter-level indoor positioning. The algorithm is very effective and practical and is superior to other existing methods for visible light indoor positioning.
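For illustration, the toy sketch below applies a Tabu search to the receiver-coordinate estimation problem using a simplified Lambertian line-of-sight channel model. The LED layout, channel model, move size, and fitness function are assumptions, not the system described in the paper.

```python
import numpy as np

def channel_gain(led, rx, m=1.0):
    """Simplified Lambertian LOS gain between a downward-facing LED and an upward-facing receiver."""
    d = np.linalg.norm(led - rx)
    cos_phi = (led[2] - rx[2]) / d
    return (m + 1) / (2 * np.pi * d**2) * cos_phi**(m + 1)

def tabu_search_position(leds, rss_meas, bounds, step=0.05, n_iter=500, tabu_len=30, seed=0):
    """Toy Tabu search: move to the best non-tabu neighbour each iteration and track the best point."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    fitness = lambda p: np.sum((np.array([channel_gain(l, p) for l in leds]) - rss_meas)**2)
    x = rng.uniform(lo, hi)
    best, best_f = x.copy(), fitness(x)
    tabu = []
    moves = np.vstack([np.eye(3), -np.eye(3)]) * step
    for _ in range(n_iter):
        cands = [np.clip(x + mv, lo, hi) for mv in moves]
        cands = [c for c in cands if not any(np.allclose(c, t) for t in tabu)] or cands
        x = min(cands, key=fitness)                        # best neighbour, even if worse
        tabu.append(x.copy()); tabu[:] = tabu[-tabu_len:]  # short-term memory
        if fitness(x) < best_f:
            best, best_f = x.copy(), fitness(x)
    return best

# Hypothetical usage: four ceiling LEDs in a 5 x 5 x 3 m room, receiver below 2 m height.
leds = np.array([[1.25, 1.25, 3.0], [1.25, 3.75, 3.0], [3.75, 1.25, 3.0], [3.75, 3.75, 3.0]])
true_rx = np.array([2.0, 3.0, 0.8])
rss = np.array([channel_gain(l, true_rx) for l in leds])
estimate = tabu_search_position(leds, rss, (np.array([0.0, 0.0, 0.0]), np.array([5.0, 5.0, 2.0])))
```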
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
Applied Computational Electromagnetics Society Journal and Newsletter, Volume 14 No. 1
1999-03-01
code validation, performance analysis, and input/output standardization; code or technique optimization and error minimization; innovations in ...
Photovoltaic Inverter Controllers Seeking AC Optimal Power Flow Solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.
This paper considers future distribution networks featuring inverter-interfaced photovoltaic (PV) systems, and addresses the synthesis of feedback controllers that seek real- and reactive-power inverter setpoints corresponding to AC optimal power flow (OPF) solutions. The objective is to bridge the temporal gap between long-term system optimization and real-time inverter control, and enable seamless PV-owner participation without compromising system efficiency and stability. The design of the controllers is grounded on a dual ε-subgradient method, while semidefinite programming relaxations are advocated to bypass the non-convexity of AC OPF formulations. Global convergence of inverter output powers is analytically established for diminishing stepsize rules for cases where: i) computational limits dictate asynchronous updates of the controller signals, and ii) inverter reference inputs may be updated at a faster rate than the power-output settling time.
NASA Technical Reports Server (NTRS)
Alag, Gurbux S.; Gilyard, Glenn B.
1990-01-01
To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper presents an approach to estimate unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as from the application of the estimation algorithm to actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
High fidelity quantum gates with vibrational qubits.
Berrios, Eduardo; Gruebele, Martin; Shyshlov, Dmytro; Wang, Lei; Babikov, Dmitri
2012-11-26
Physical implementation of quantum gates acting on qubits does not achieve a perfect fidelity of 1. The actual output qubit may not match the targeted output of the desired gate. According to theoretical estimates, intrinsic gate fidelities >99.99% are necessary so that error correction codes can be used to achieve perfect fidelity. Here we test what fidelity can be accomplished for a CNOT gate executed by a shaped ultrafast laser pulse interacting with vibrational states of the molecule SCCl2. This molecule has been used as a test system for low-fidelity calculations before. To make our test more stringent, we include vibrational levels that do not encode the desired qubits but are close enough in energy to interfere with population transfer by the laser pulse. We use two complementary approaches: optimal control theory determines what the best possible pulse can do; a more constrained physical model calculates what an experiment likely can do. Optimal control theory finds pulses with fidelity >0.9999, in excess of the quantum error correction threshold, with 8 × 10^4 iterations. On the other hand, the physical model achieves only 0.9992 after 8 × 10^4 iterations. Both calculations converge as an inverse power law toward unit fidelity after >10^2 iterations/generations. In principle, the fidelities necessary for quantum error correction are reachable with qubits encoded by molecular vibrations. In practice, it will be challenging with current laboratory instrumentation because of slow convergence past fidelities of 0.99.
NASA Astrophysics Data System (ADS)
Luy, N. T.
2018-04-01
The design of distributed cooperative H∞ optimal controllers for multi-agent systems is a major challenge when the agents' models are uncertain multi-input and multi-output nonlinear systems in strict-feedback form in the presence of external disturbances. In this paper, first, the distributed cooperative H∞ optimal tracking problem is transformed into controlling the cooperative tracking error dynamics in affine form. Second, control schemes and online algorithms are proposed via adaptive dynamic programming (ADP) and the theory of zero-sum differential graphical games. The schemes use only one neural network (NN) for each agent instead of three from ADP to reduce computational complexity as well as to avoid choosing initial NN weights for stabilising controllers. It is shown that despite not using knowledge of the cooperative internal dynamics, the proposed algorithms not only approximate the Nash equilibrium values but also guarantee all signals, such as the NN weight approximation errors and the cooperative tracking errors in the closed-loop system, to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is shown by simulation results of an application to wheeled mobile multi-robot systems.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2008-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2010-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a noisy memoryless channel; the optimization covers the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors were made.
Negative autoregulation matches production and demand in synthetic transcriptional networks.
Franco, Elisa; Giordano, Giulia; Forsberg, Per-Ola; Murray, Richard M
2014-08-15
We propose a negative feedback architecture that regulates the activity of artificial genes, or "genelets", to meet the downstream demand for their outputs, achieving robustness with respect to uncertain open-loop output production rates. In particular, we consider the case where the outputs of two genelets interact to form a single assembled product. We show with analysis and experiments that negative autoregulation matches the production and demand of the outputs: the magnitude of the regulatory signal is proportional to the "error" between the circuit output concentration and its actual demand. This two-device system is experimentally implemented using in vitro transcriptional networks, where reactions are systematically designed by optimizing nucleic acid sequences with publicly available software packages. We build a predictive ordinary differential equation (ODE) model that captures the dynamics of the system and can be used to numerically assess the scalability of this architecture to larger sets of interconnected genes. Finally, with numerical simulations we contrast our negative autoregulation scheme with a cross-activation architecture, which is less scalable and results in slower response times.
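A minimal ODE sketch of this architecture is given below, under illustrative assumptions: each genelet's free (unassembled) output represents the mismatch between production and demand and represses its own production, while the assembly reaction consuming both outputs plays the role of demand. Rate constants are placeholders, not the fitted values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def genelet_negative_autoregulation(t, y, beta1, beta2, K, k_assemble, k_deg):
    """Toy model: outputs x1, x2 assemble into product p; free output feeds back
    to repress its own production (negative autoregulation)."""
    x1, x2, p = y
    prod1 = beta1 * K / (K + x1)          # production of genelet 1, repressed by free x1
    prod2 = beta2 * K / (K + x2)          # production of genelet 2, repressed by free x2
    assemble = k_assemble * x1 * x2       # downstream demand: assembly of the product
    return [prod1 - assemble - k_deg * x1,
            prod2 - assemble - k_deg * x2,
            assemble - k_deg * p]

# Mismatched open-loop rates (beta2 > beta1); autoregulation balances the free outputs.
sol = solve_ivp(genelet_negative_autoregulation, (0.0, 500.0), [0.0, 0.0, 0.0],
                args=(1.0, 2.0, 50.0, 0.001, 0.005), dense_output=True)
```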
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillman, Benjamin R.; Marchand, Roger T.; Ackerman, Thomas P.
Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR- and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
Narayanan, Vignesh; Jagannathan, Sarangapani
2017-06-08
This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of cost functions of individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward in time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time for the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic and result in variable interevent times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.
Guaranteed convergence of the Hough transform
NASA Astrophysics Data System (ADS)
Soffer, Menashe; Kiryati, Nahum
1995-01-01
The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into a problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a-priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a-priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a-priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
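The voting step whose quantization the paper analyzes can be written in a few lines. The sketch below uses a simple nearest-cell vote rather than a continuous kernel, so it only illustrates where the quantization intervals d_rho and d_theta enter.

```python
import numpy as np

def hough_lines(points, d_rho, d_theta, rho_max):
    """Straight-line Hough transform in normal parameterisation: each edge point votes
    for the (rho, theta) cells it is consistent with; the global maximum of the
    accumulator gives the detected line."""
    thetas = np.arange(0.0, np.pi, d_theta)
    rhos = np.arange(-rho_max, rho_max, d_rho)
    acc = np.zeros((len(rhos), len(thetas)), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        rho = x * cos_t + y * sin_t                      # rho(theta) for this point
        idx = np.round((rho + rho_max) / d_rho).astype(int)
        ok = (idx >= 0) & (idx < len(rhos))
        acc[idx[ok], np.nonzero(ok)[0]] += 1             # nearest-cell vote
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return rhos[i], thetas[j]
```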
Optimization of an integrated wavelength monitor device
NASA Astrophysics Data System (ADS)
Wang, Pengfei; Brambilla, Gilberto; Semenova, Yuliya; Wu, Qiang; Farrell, Gerald
2011-05-01
In this paper an edge filter based on multimode interference in an integrated waveguide is optimized for a wavelength monitoring application. This can also be used as a demodulation element in a fibre Bragg grating sensing system. A global optimization algorithm is presented for the optimum design of the multimode interference device, including a range of parameters of the multimode waveguide, such as length, width and position of the input and output waveguides. The designed structure demonstrates the desired spectral response for wavelength measurements. Fabrication tolerance is also analysed numerically for this structure.
Evaluation of Diagnostic CO2 Flux and Transport Modeling in NU-WRF and GEOS-5
NASA Astrophysics Data System (ADS)
Kawa, S. R.; Collatz, G. J.; Tao, Z.; Wang, J. S.; Ott, L. E.; Liu, Y.; Andrews, A. E.; Sweeney, C.
2015-12-01
We report on recent diagnostic (constrained by observations) model simulations of atmospheric CO2 flux and transport using a newly developed facility in the NASA Unified-Weather Research and Forecast (NU-WRF) model. The results are compared to CO2 data (ground-based, airborne, and GOSAT) and to corresponding simulations from a global model that uses meteorology from the NASA GEOS-5 Modern Era Retrospective analysis for Research and Applications (MERRA). The objective of these intercomparisons is to assess the relative strengths and weaknesses of the respective models in pursuit of an overall carbon process improvement at both regional and global scales. Our guiding hypothesis is that the finer resolution and improved land surface representation in NU-WRF will lead to better comparisons with CO2 data than those using global MERRA, which will, in turn, inform process model development in global prognostic models. Initial intercomparison results, however, have generally been mixed: NU-WRF is better at some sites and times but not uniformly. We are examining the model transport processes in detail to diagnose differences in the CO2 behavior. These comparisons are done in the context of a long history of simulations from the Parameterized Chemistry and Transport Model, based on GEOS-5 meteorology and Carnegie Ames-Stanford Approach-Global Fire Emissions Database (CASA-GFED) fluxes, that capture much of the CO2 variation from synoptic to seasonal to global scales. We have run the NU-WRF model using unconstrained, internally generated meteorology within the North American domain, and with meteorological 'nudging' from the Global Forecast System and the North American Regional Reanalysis (NARR) in an effort to optimize the CO2 simulations. Output results constrained by NARR show the best comparisons to data. Discrepancies, of course, may arise either from flux or transport errors, and compensating errors are possible. Resolving their interplay is also important to using the data in inverse models. Recent analysis is focused on planetary boundary layer depth, which can be significantly different between MERRA and NU-WRF, along with subgrid transport differences. Characterization of transport differences between the models will allow us to better constrain the CO2 fluxes, which is the major objective of this work.
Optimal estimation for global ground-level fine particulate matter concentrations
NASA Astrophysics Data System (ADS)
Donkelaar, Aaron; Martin, Randall V.; Spurr, Robert J. D.; Drury, Easan; Remer, Lorraine A.; Levy, Robert C.; Wang, Jun
2013-06-01
We develop an optimal estimation (OE) algorithm based on top-of-atmosphere reflectances observed by the MODIS satellite instrument to retrieve near-surface fine particulate matter (PM2.5). The GEOS-Chem chemical transport model is used to provide prior information for the Aerosol Optical Depth (AOD) retrieval and to relate total column AOD to PM2.5. We adjust the shape of the GEOS-Chem relative vertical extinction profiles by comparison with lidar retrievals from the CALIOP satellite instrument. Surface reflectance relationships used in the OE algorithm are indexed by land type. Error quantities needed for this OE algorithm are inferred by comparison with AOD observations taken by a worldwide network of sun photometers (AERONET) and extended globally based upon aerosol speciation and cross correlation for simulated values, and upon land type for observational values. Significant agreement in PM2.5 is found over North America for 2005 (slope = 0.89; r = 0.82; 1-σ error = 1 µg/m3 + 27%), with improved coverage and correlation relative to previous work for the same region and time period, although certain subregions, such as the San Joaquin Valley of California, are better represented by previous estimates. Independently derived error estimates of the OE PM2.5 values at in situ locations over North America (±(2.5 µg/m3 + 31%)) and Europe (±(3.5 µg/m3 + 30%)) are corroborated by comparison with in situ observations, although globally (error estimates of ±(3.0 µg/m3 + 35%)) they may be underestimated. Global population-weighted PM2.5 at 50% relative humidity is estimated as 27.8 µg/m3 at 0.1° × 0.1° resolution.
Dynamic Simulation of Human Gait Model With Predictive Capability.
Sun, Jinming; Wu, Shaoli; Voglewede, Philip A
2018-03-01
In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
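The predictive part of the controller can be illustrated with a toy linear MPC on a single joint modelled as a double integrator; the model, horizon, and weights below are assumptions for illustration and not the seven-segment gait model.

```python
import numpy as np

# Toy MPC: a joint angle modelled as a discrete double integrator tracks a reference;
# at each step the controller predicts N steps ahead, solves an unconstrained
# least-squares problem for the inputs, applies the first input, and repeats.
dt, N, lam = 0.01, 25, 1e-4
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])                       # output: joint angle

# Prediction matrices: y = F x0 + G u over the horizon
F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

def mpc_step(x, ref):
    """Return the first optimal input for the length-N reference trajectory ref."""
    u = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (ref - F @ x))
    return u[0]

# Closed-loop simulation tracking a sinusoidal, gait-like angle trajectory
t = np.arange(0.0, 2.0, dt)
ref_full = 0.3 * np.sin(2 * np.pi * t)
x = np.zeros(2)
for k in range(len(t) - N):
    u = mpc_step(x, ref_full[k + 1:k + 1 + N])
    x = A @ x + (B * u).ravel()
```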
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Yi, B; Prado, K
2015-06-15
Purpose: This work is to investigate the feasibility of a standardized monthly quality check (QC) of LINAC output determination in a multi-site, multi-LINAC institution. The QC was developed to determine individual LINAC output using the same optimized measurement setup and a constant calibration factor for all machines across the institution. Methods: The QA data over 4 years from 7 Varian machines across four sites were analyzed. The monthly output constancy checks were performed using a fixed source-to-chamber distance (SCD), with no couch position adjustment throughout the measurement cycle, for all the photon energies (6 and 18 MV) and electron energies (6, 9, 12, 16 and 20 MeV). The constant monthly output calibration factor (Nconst) was determined by averaging the machines' output data, acquired with the same monthly ion chamber. If a different monthly ion chamber was used, Nconst was then re-normalized to account for its different NDW,Co-60. Here, the possible changes of Nconst over 4 years have been tracked, and the precision of output results based on this standardized monthly QA program relative to the TG-51 calibration for each machine was calculated. Any outlier of the group was investigated. Results: The possible changes of Nconst varied between 0-0.9% over 4 years. The normalization of absorbed-dose-to-water calibration factors corrects for up to 3.3% variations among different monthly QA chambers. The LINAC output precision based on this standardized monthly QC relative to the TG-51 output calibration is within 1% for 6 MV photon energy and 2% for 18 MV and all the electron energies. A human error in one TG-51 report was found through a close scrutiny of outlier data. Conclusion: This standardized QC allows for a reasonably simplified, precise and robust monthly LINAC output constancy check, with the increased sensitivity needed to detect possible human errors and machine problems.
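A hedged arithmetic sketch of the constant-calibration-factor idea is shown below. All readings, calibration factors, and tolerances are placeholder numbers, not the institution's data; the renormalization simply rescales a reading taken with a non-reference chamber by the ratio of N_D,w (Co-60) factors.

```python
import numpy as np

# Placeholder data: annual TG-51 outputs (cGy/MU) and monthly readings (nC/MU) taken with
# the reference monthly chamber at the fixed-SCD setup, one value per machine.
tg51_output = np.array([1.000, 1.002, 0.999, 1.001])
monthly_reading = np.array([18.52, 18.56, 18.47, 18.55])
N_const = np.mean(tg51_output / monthly_reading)          # constant factor, cGy per nC

def monthly_output(reading, ndw_chamber, ndw_ref):
    """Output from a monthly reading; a non-reference chamber's reading is first put on
    the reference chamber's scale via the ratio of N_D,w calibration factors."""
    return reading * (ndw_chamber / ndw_ref) * N_const

# Deviation from the nominal 1 cGy/MU, to be compared against the 1-2% tolerance.
dev_pct = 100.0 * (monthly_output(18.60, 5.39e-2, 5.47e-2) - 1.0)
```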
NASA Astrophysics Data System (ADS)
Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling
2018-06-01
Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which poses a huge challenge to global calibration. This paper presents a global calibration method for multiple cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras' coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution for the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, Stefan A.
2010-11-01
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware Req.: Multi-platform; Related/auxiliary software: PVM (if running in parallel).
Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties
NASA Astrophysics Data System (ADS)
Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing
This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.
Parametric Optimization of Thermoelectric Generators for Waste Heat Recovery
NASA Astrophysics Data System (ADS)
Huang, Shouyuan; Xu, Xianfan
2016-10-01
This paper presents a methodology for design optimization of thermoelectric-based waste heat recovery systems called thermoelectric generators (TEGs). The aim is to maximize the power output from thermoelectrics which are used as add-on modules to an existing gas-phase heat exchanger, without negative impacts, e.g., maintaining a minimum heat dissipation rate from the hot side. A numerical model is proposed for TEG coupled heat transfer and electrical power output. This finite-volume-based model simulates different types of heat exchangers, i.e., counter-flow and cross-flow, for TEGs. Multiple-filled skutterudites and bismuth-telluride-based thermoelectric modules (TEMs) are applied, respectively, in higher and lower temperature regions. The response surface methodology is implemented to determine the optimized TEG size along and across the flow direction and the height of thermoelectric couple legs, and to analyze their covariance and relative sensitivity. A genetic algorithm is employed to verify the globality of the optimum. The presented method will be generally useful for optimizing heat-exchanger-based TEG performance.
DCS-Neural-Network Program for Aircraft Control and Testing
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
2006-01-01
A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
Spatial interpolation of solar global radiation
NASA Astrophysics Data System (ADS)
Lussana, C.; Uboldi, F.; Antoniazzi, C.
2010-09-01
Solar global radiation is defined as the radiant flux incident onto an area element of the terrestrial surface. Its direct knowledge plays a crucial role in many applications, from agrometeorology to environmental meteorology. The ARPA Lombardia meteorological network includes about one hundred pyranometers, mostly distributed in the southern part of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by the ARPA Lombardia network. The background field is obtained using SMARTS (The Simple Model of the Atmospheric Radiative Transfer of Sunshine, Gueymard, 2001). The model is initialised by assuming clear-sky conditions and takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces information about cloud presence and influence into the analysis fields. A particular effort is devoted to preventing observations affected by large errors of different kinds (representativity errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.
NASA Astrophysics Data System (ADS)
Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.
2018-02-01
This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a fuzzy logic system is combined with dynamic surface control in the backstepping design technique. To ensure satisfaction of the output constraints, an asymmetric time-varying Barrier Lyapunov Function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of online parameters updated for each subsystem is reduced to two. Hence, semi-global uniform ultimate boundedness (SGUUB) of all the closed-loop signals with appropriate tracking error convergence is guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.
NASA Astrophysics Data System (ADS)
Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin
2018-04-01
This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without repetitive evaluation of a cost function. To regulate the output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses voltage vectors, as finite control resources, excluding the zero voltage vectors, which produce CMVs of ±Vdc/2 in the VSI. In a model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the effort of repeatedly evaluating the cost function. The two non-zero voltage vectors are then optimally allocated so that the output current approaches the reference current as closely as possible. Thus, low CMV, rapid current-following capability, and satisfactory output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
Optimizing Internal Wave Drag in a Forward Barotropic Model with Semidiurnal Tides
2015-01-23
A global tuning factor is applied, with a larger value in the Atlantic. Our best global mean RMS error of 4.4 cm for areas deeper than 1000 m and equatorward of 66° is among the lowest obtained in a forward barotropic tide model. Keywords: barotropic tides; global modeling; linear wave drag.
Beam pointing angle optimization and experiments for vehicle laser Doppler velocimetry
NASA Astrophysics Data System (ADS)
Fan, Zhe; Hu, Shuling; Zhang, Chunxi; Nie, Yanju; Li, Jun
2015-10-01
Beam pointing angle (BPA) is one of the key parameters that affect the operating performance of a laser Doppler velocimetry (LDV) system. By considering velocity sensitivity and echo power, the optimized BPA of a vehicle LDV is analyzed for the first time. Assuming that the mounting error is within ±1.0 deg and that the reflectivity and roughness vary across scenarios, the optimized BPA lies in the range from 29 to 43 deg. Accordingly, the velocity sensitivity is in the range of 1.25 to 1.76 MHz/(m/s), and the normalized echo power at the optimized BPA is greater than 53.49% of that at 0 deg. Laboratory experiments on a rotating table are carried out with BPAs of 10, 35, and 66 deg, and the results coincide with the theoretical analysis. Further, a vehicle experiment with the optimized BPA of 35 deg is conducted and compared with a microwave radar (accuracy of ±0.5% of full-scale output). The root-mean-square error of the LDV results (0.0202 m/s) is smaller than that of the Microstar II (0.1495 m/s), and the mean velocity discrepancy is 0.032 m/s. It is also proven that, with the optimized BPA, both high velocity sensitivity and acceptable echo power can be guaranteed simultaneously.
Abdelkarim, Noha; Mohamed, Amr E; El-Garhy, Ahmed M; Dorrah, Hassen T
2016-01-01
The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller.
Tavakoli, Behnoosh; Zhu, Quing
2013-01-01
Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively.
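The two-step pattern described above, a global search supplying the starting point for a gradient method, can be sketched as follows. Here scipy's differential evolution stands in for the paper's genetic algorithm, and a generic multimodal function stands in for the DOT data misfit; both substitutions are assumptions made only for illustration.

```python
# Illustrative two-step inversion: global search first, conjugate-gradient refinement second.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def misfit(x):
    # Rastrigin-style surrogate for a data misfit with many local minima
    return float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)) + 10 * len(x))

bounds = [(-5.12, 5.12)] * 2                              # e.g., scaled (mu_a, mu_s') search ranges
step1 = differential_evolution(misfit, bounds, seed=0)    # step 1: global (GA-like) search
step2 = minimize(misfit, step1.x, method="CG")            # step 2: conjugate-gradient refinement
print(step1.x, step2.x, step2.fun)
```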
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
Realizable optimal control for a remotely piloted research vehicle. [stability augmentation
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1980-01-01
The design of a control system using linear-quadratic regulator (LQR) control law theory for time-invariant systems, in conjunction with an incremental gradient procedure, is presented. The incremental gradient technique reduces the full-state feedback controller design generated by the LQR algorithm to a realizable design. With a realizable controller, the feedback gains are based only on the available system outputs instead of the full-state outputs. The design is for a remotely piloted research vehicle (RPRV) stability augmentation system and includes methods for accounting for noisy measurements, discrete controls with zero-order-hold outputs, and computational delay errors. Results from simulation studies of the RPRV response to an elevator step input, together with frequency analysis techniques, are included to illustrate these effects and their influence on the controller design.
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Patnaik, Surya N.; Murthy, Pappu L. N.
1993-01-01
The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated by using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network with the code NETS. Optimum designs for new design conditions were predicted by using the trained network. Neural net prediction of optimum designs was found to be satisfactory for most of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klymenko, M. V.; Remacle, F., E-mail: fremacle@ulg.ac.be
2014-10-28
A methodology is proposed for designing a low-energy consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The searching space is determined by elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error over actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device: a single dual-gate quantum dot and a charge sensor. Their physical parameters are optimized to compute either the sum or the carry out outputs and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied by the voltage levels on the two gate electrodes attached to the QD. The complex logic ternary operations are directly implemented on an extremely simple device, characterized by small sizes and low-energy consumption compared to devices based on switching single-electron transistors. The design methodology is general and provides a rational approach for realizing non-switching logic operations on QD devices.
Siddiqui, Hasib; Bouman, Charles A
2007-03-01
Conventional halftoning methods employed in electrophotographic printers tend to produce Moiré artifacts when used for printing images scanned from printed material, such as books and magazines. We present a novel approach for descreening color scanned documents aimed at providing an efficient solution to the Moiré problem in practical imaging devices, including copiers and multifunction printers. The algorithm works by combining two nonlinear image-processing techniques, resolution synthesis-based denoising (RSD), and modified smallest univalue segment assimilating nucleus (SUSAN) filtering. The RSD predictor is based on a stochastic image model whose parameters are optimized beforehand in a separate training procedure. Using the optimized parameters, RSD classifies the local window around the current pixel in the scanned image and applies filters optimized for the selected classes. The output of the RSD predictor is treated as a first-order estimate to the descreened image. The modified SUSAN filter uses the output of RSD for performing an edge-preserving smoothing on the raw scanned data and produces the final output of the descreening algorithm. Our method does not require any knowledge of the screening method, such as the screen frequency or dither matrix coefficients, that produced the printed original. The proposed scheme not only suppresses the Moiré artifacts, but, in addition, can be trained with intrinsic sharpening for deblurring scanned documents. Finally, once optimized for a periodic clustered-dot halftoning method, the same algorithm can be used to inverse halftone scanned images containing stochastic error diffusion halftone noise.
Efficient global fiber tracking on multi-dimensional diffusion direction maps
NASA Astrophysics Data System (ADS)
Klein, Jan; Köhler, Benjamin; Hahn, Horst K.
2012-02-01
Global fiber tracking algorithms have recently been proposed which were able to compute results of unprecedented quality. They account for avoiding accumulation errors by a global optimization process at the cost of a high computation time of several hours or even days. In this paper, we introduce a novel global fiber tracking algorithm which, for the first time, globally optimizes the underlying diffusion direction map obtained from DTI or HARDI data, instead of single fiber segments. As a consequence, the number of iterations in the optimization process can drastically be reduced by about three orders of magnitude. Furthermore, in contrast to all previous algorithms, the density of the tracked fibers can be adjusted after the optimization within a few seconds. We evaluated our method for diffusion-weighted images obtained from software phantoms, healthy volunteers, and tumor patients. We show that difficult fiber bundles, e.g., the visual pathways or tracts for different motor functions can be determined and separated in an excellent quality. Furthermore, crossing and kissing bundles are correctly resolved. On current standard hardware, a dense fiber tracking result of a whole brain can be determined in less than half an hour which is a strong improvement compared to previous work.
Xia, Youshen; Kamel, Mohamed S
2007-06-01
Identification of a general nonlinear noisy system viewed as an estimation of a predictor function is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and then the optimal fused data are incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm can converge globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatial temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.
2017-09-01
in the vertical (z) directions. There are several instrument controls, such as proportional, integral, and derivative (PID) gain, as well as tip force ... the PID control, where P stands for proportional gain, I stands for integral gain, and D stands for derivative gain. An additional parameter that ... contributes to the scanned image quality is the set point. Proportional gain is multiplied by the error to adjust the controller output, and integral gain sums ...
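Since this fragment describes the roles of the PID terms, a minimal, self-contained sketch may help; the gains, time step, and the first-order plant used for demonstration are assumptions, not the instrument's actual parameters.

```python
# Minimal PID controller sketch (illustrative only).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, set_point, measurement):
        error = set_point - measurement            # proportional term acts on the current error
        self.integral += error * self.dt           # integral term sums the error over time
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a simple first-order plant (dy/dt = -y + u) toward a set point of 1.0
pid, y, dt = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01), 0.0, 0.01
for _ in range(500):
    u = pid.update(1.0, y)
    y += dt * (-y + u)
print(y)                                           # close to the set point after the transient
```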
Optimal generalized multistep integration formulae for real-time digital simulation
NASA Technical Reports Server (NTRS)
Moerder, D. D.; Halyo, N.
1985-01-01
The problem of discretizing a dynamical system for real-time digital simulation is considered. Treating the system and its simulation as stochastic processes leads to a statistical characterization of simulator fidelity. A plant discretization procedure based on an efficient matrix generalization of explicit linear multistep discrete integration formulae is introduced, which minimizes a weighted sum of the mean squared steady-state and transient error between the system and simulator outputs.
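As a concrete, though not optimized, instance of an explicit linear multistep discretization for real-time simulation, the sketch below advances a linear plant with the standard two-step Adams-Bashforth weights; the paper's optimal generalized coefficients would replace the 3/2 and -1/2 weights. The plant matrices and step size are assumed.

```python
# Explicit two-step (Adams-Bashforth) discretization of a linear plant, as a stand-in for an
# optimized generalized multistep formula.
import numpy as np

A = np.array([[0.0, 1.0], [-4.0, -0.4]])            # example lightly damped second-order plant
B = np.array([0.0, 1.0])
dt = 0.01

def f(x, u):
    return A @ x + B * u                             # continuous-time dynamics

x = np.zeros(2)
f_prev = f(x, 1.0)                                   # bootstrap the two-step formula
for k in range(1000):
    f_curr = f(x, 1.0)                               # unit step input
    x = x + dt * (1.5 * f_curr - 0.5 * f_prev)       # AB2 update
    f_prev = f_curr
print(x)                                             # approaches the steady state [0.25, 0.0]
```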
Bilinear modeling and nonlinear estimation
NASA Technical Reports Server (NTRS)
Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.
1989-01-01
New methods are illustrated for online nonlinear estimation, applied to the lateral deflection of an elastic beam from on-board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection, is contrasted with the usual method of the extended Kalman filter (EKF). It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of the global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance is to be updated in online implementation. This is in contrast to the computational approach of EKF methods, which arises from local linearization with respect to the current conditional mean. Processing refinements for nonlinear estimation based on optimal, nonlinear interpolation between observations are also highlighted. In these methods the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.
Halim, Dunant; Cheng, Li; Su, Zhongqing
2011-04-01
The work proposed an optimization approach for structural sensor placement to improve the performance of a vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed and used to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was contributed additively by the modal virtual sensing error and the measurement noise energy; (ii) each modal virtual sensing error was contributed by the modal observability levels for both the structural sensing and the target acoustic virtual sensing; and (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal design of structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis on a panel-cavity system demonstrated the importance of structural sensor placement on virtual sensing and active noise control performance, particularly for cavity-controlled modes.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
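A rough sketch of this hybrid search is given below: scipy's differential evolution stands in for the genetic algorithm, truncated-Newton (method="TNC") performs the local refinement, and the single piece of prior information is represented as a quadratic penalty on the less sensitive parameter. The two-zone head model, the parameter values, and the penalty weight are all hypothetical.

```python
# Illustrative hybrid of a global evolutionary stage and truncated-Newton refinement with a
# prior-information penalty. The head model and numbers are toy stand-ins.
import numpy as np
from scipy.optimize import differential_evolution, minimize

x_obs = np.linspace(0.0, 1000.0, 11)                  # observation locations (m)

def head(params, x):
    T1, T2 = params                                   # two zone parameters (toy model)
    return 50.0 - 0.5 * x / T1 - 2e-4 * x**2 / T2     # hypothetical steady-state head profile

rng = np.random.default_rng(3)
h_obs = head((80.0, 120.0), x_obs) + 0.05 * rng.standard_normal(x_obs.size)
T2_prior = 120.0                                      # single piece of prior information

def objective(params):
    misfit = np.sum((head(params, x_obs) - h_obs) ** 2)
    return misfit + 0.1 * ((params[1] - T2_prior) / T2_prior) ** 2   # prior penalty term

bounds = [(10.0, 300.0), (10.0, 300.0)]
coarse = differential_evolution(objective, bounds, seed=1)           # global (GA-like) stage
fine = minimize(objective, coarse.x, method="TNC", bounds=bounds)    # truncated-Newton refinement
print(fine.x)                                         # near the "true" (80, 120)
```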
Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor
NASA Astrophysics Data System (ADS)
Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei
2013-08-01
Fiber-Optic Gyroscope (FOG) scale factor nonlinear error results in errors in a Strapdown Inertial Navigation System (SINS). In order to reduce the nonlinear error of the FOG scale factor in SINS, a compensation method based on piecewise curve fitting of the FOG output is proposed in this paper. Firstly, the causes of FOG scale factor error are introduced and the definition of nonlinear degree is provided. Then the output range of the FOG is divided into several small pieces, and curve fitting is performed in each piece to obtain the corresponding scale factor parameters. Different scale factor parameters are used in different pieces to improve FOG output precision. These parameters are identified using a three-axis turntable, so that the nonlinear error of the FOG scale factor can be reduced. Finally, a three-axis swing experiment of the SINS verifies that the proposed method can reduce the attitude output errors of the SINS by compensating the nonlinear error of the FOG scale factor and improve the precision of navigation. The results of the experiments also demonstrate that the compensation scheme is easy to implement. It can effectively compensate the nonlinear error of the FOG scale factor with only slightly increased computational complexity. This method can be used in inertial technology based on FOG to improve precision.
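The piecewise identification step might look like the sketch below: one low-order polynomial (output to rate) is fitted per output segment and then used for compensation. The breakpoints, the synthetic turntable data, and the nonlinearity model are assumptions for illustration, not the paper's calibration data.

```python
# Illustrative piecewise scale-factor identification and compensation.
import numpy as np

def fit_piecewise_scale(rate_true, fog_output, breakpoints, deg=1):
    """Fit one polynomial (output -> rate) per output segment."""
    segments = []
    edges = np.concatenate(([-np.inf], np.asarray(breakpoints, float), [np.inf]))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (fog_output >= lo) & (fog_output < hi)
        segments.append((lo, hi, np.polyfit(fog_output[mask], rate_true[mask], deg)))
    return segments

def compensate(raw_output, segments):
    for lo, hi, coeffs in segments:
        if lo <= raw_output < hi:
            return np.polyval(coeffs, raw_output)

# Synthetic turntable calibration: a mildly nonlinear scale factor of ~1e4 counts/(deg/s)
rate = np.linspace(-100.0, 100.0, 401)                        # turntable rates (deg/s)
out = 1.0e4 * rate * (1.0 + 5e-4 * np.abs(rate) / 100.0)      # simulated FOG output (counts)
segs = fit_piecewise_scale(rate, out, breakpoints=[-5e5, 0.0, 5e5])
print(compensate(2.5e5, segs))                                # compensated rate for a raw output
```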
Optimizing UV Index determination from broadband irradiances
NASA Astrophysics Data System (ADS)
Tereszchuk, Keith A.; Rochon, Yves J.; McLinden, Chris A.; Vaillancourt, Paul A.
2018-03-01
A study was undertaken to improve upon the prognosticative capability of Environment and Climate Change Canada's (ECCC) UV Index forecast model. An aspect of that work, and the topic of this communication, was to investigate the use of the four UV broadband surface irradiance fields generated by ECCC's Global Environmental Multiscale (GEM) numerical prediction model to determine the UV Index. The basis of the investigation involves the creation of a suite of routines which employ high-spectral-resolution radiative transfer code developed to calculate UV Index fields from GEM forecasts. These routines employ a modified version of the Cloud-J v7.4 radiative transfer model, which integrates GEM output to produce high-spectral-resolution surface irradiance fields. The output generated using the high-resolution radiative transfer code served to verify and calibrate GEM broadband surface irradiances under clear-sky conditions and their use in providing the UV Index. A subsequent comparison of irradiances and UV Index under cloudy conditions was also performed. Linear correlation agreement of surface irradiances from the two models for each of the two higher UV bands covering 310.70-330.0 and 330.03-400.00 nm is typically greater than 95 % for clear-sky conditions with associated root-mean-square relative errors of 6.4 and 4.0 %. However, underestimations of clear-sky GEM irradiances were found on the order of ˜ 30-50 % for the 294.12-310.70 nm band and by a factor of ˜ 30 for the 280.11-294.12 nm band. This underestimation can be significant for UV Index determination but would not impact weather forecasting. Corresponding empirical adjustments were applied to the broadband irradiances now giving a correlation coefficient of unity. From these, a least-squares fitting was derived for the calculation of the UV Index. The resultant differences in UV indices from the high-spectral-resolution irradiances and the resultant GEM broadband irradiances are typically within 0.2-0.3 with a root-mean-square relative error in the scatter of ˜ 6.6 % for clear-sky conditions. Similar results are reproduced under cloudy conditions with light to moderate clouds, with a relative error comparable to the clear-sky counterpart; under strong attenuation due to clouds, a substantial increase in the root-mean-square relative error of up to 35 % is observed due to differing cloud radiative transfer models.
Evaluation of hydrologic components of community land model 4 and bias identification
Du, Enhao; Vittorio, Alan Di; Collins, William D.
2015-04-01
Runoff and soil moisture are two key components of the global hydrologic cycle that should be validated at local to global scales in Earth System Models (ESMs) used for climate projection. Here, we have evaluated the runoff and surface soil moisture output by the Community Climate System Model (CCSM) along with 8 other models from the Coupled Model Intercomparison Project (CMIP5) repository using satellite soil moisture observations and stream gauge corrected runoff products. A series of Community Land Model (CLM) runs forced by reanalysis and coupled model outputs was also performed to identify atmospheric drivers of biases and uncertainties in the CCSM. Results indicate that surface soil moisture simulations tend to be positively biased in high latitude areas by most selected CMIP5 models except CCSM, FGOALS, and BCC, which share similar land surface model code. With the exception of GISS, runoff simulations by all selected CMIP5 models were overestimated in mountain ranges and in most of the Arctic region. In general, positive biases in CCSM soil moisture and runoff due to precipitation input error were offset by negative biases induced by temperature input error. Excluding the impact from atmosphere modeling, the global mean of seasonal surface moisture oscillation was out of phase compared to observations in many years during 1985–2004. The CLM also underestimated runoff in the Amazon, central Africa, and south Asia, where soils all have high clay content. We hypothesize that lack of a macropore flow mechanism is partially responsible for this underestimation. However, runoff was overestimated in the areas covered by volcanic ash soils (i.e., Andisols), which might be associated with poor soil porosity representation in CLM. Finally, our results indicate that CCSM predictability of hydrology could be improved by addressing the compensating errors associated with precipitation and temperature and updating the CLM soil representation.
Zouari, Farouk; Ibeas, Asier; Boulkroune, Abdesselem; Cao, Jinde; Mehdi Arefi, Mohammad
2018-06-01
This study addresses the issue of adaptive output tracking control for a category of uncertain nonstrict-feedback delayed incommensurate fractional-order systems in the presence of nonaffine structures, unmeasured pseudo-states, unknown control directions, unknown actuator nonlinearities and output constraints. Firstly, the mean value theorem and the Gaussian error function are introduced to eliminate the difficulties that arise from the nonaffine structures and the unknown actuator nonlinearities, respectively. Secondly, the immeasurable tracking error variables are suitably estimated by constructing a fractional-order linear observer. Thirdly, the neural network, the Razumikhin Lemma, the variable separation approach, and the smooth Nussbaum-type function are used to deal with the uncertain nonlinear dynamics, the unknown time-varying delays, the nonstrict feedback and the unknown control directions, respectively. Fourthly, asymmetric barrier Lyapunov functions are employed to overcome the violation of the output constraints and to tune online the parameters of the adaptive neural controller. Through rigorous analysis, it is proved that the boundedness of all variables in the closed-loop system and semi-global asymptotic tracking are ensured without transgression of the constraints. The principal contributions of this study can be summarized as follows: (1) based on Caputo's definitions and new lemmas, methods concerning the controllability, observability and stability analysis of integer-order systems are extended to fractional-order ones, and (2) the output tracking objective for a relatively large class of uncertain systems is achieved with a simple controller and fewer tuning parameters. Finally, computer-simulation studies from the robotic field are given to demonstrate the effectiveness of the proposed controller.
Guo, Y C; Wang, H; Wu, H P; Zhang, M Q
2015-12-21
To address the drawbacks of large mean square error (MSE) and slow convergence speed of the constant modulus algorithm (CMA) in equalizing multi-modulus signals, a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA encoding sequences (GAFS-DNA-MMA) is proposed. To improve the convergence rate and reduce the MSE, the proposed algorithm adopts an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, is used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of the MMA are obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA.
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method in this paper. The method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA with a full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.
Fisher information and asymptotic normality in system identification for quantum Markov chains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guta, Madalin
2011-06-15
This paper deals with the problem of estimating the coupling constant θ of a mixing quantum Markov chain. For a repeated measurement on the chain's output we show that the outcomes' time average has an asymptotically normal (Gaussian) distribution, and we give the explicit expressions of its mean and variance. In particular, we obtain a simple estimator of θ whose classical Fisher information can be optimized over different choices of measured observables. We then show that the quantum state of the output together with the system is itself asymptotically Gaussian and compute its quantum Fisher information, which sets an absolute bound to the estimation error. The classical and quantum Fisher information are compared in a simple example. In the vicinity of θ=0 we find that the quantum Fisher information has a quadratic rather than linear scaling in output size, and asymptotically the Fisher information is localized in the system, while the output is independent of the parameter.
Computational logic with square rings of nanomagnets
NASA Astrophysics Data System (ADS)
Arava, Hanu; Derlet, Peter M.; Vijayakumar, Jaianth; Cui, Jizhai; Bingham, Nicholas S.; Kleibert, Armin; Heyderman, Laura J.
2018-06-01
Nanomagnets are a promising low-power alternative to traditional computing. However, the successful implementation of nanomagnets in logic gates has been hindered so far by a lack of reliability. Here, we present a novel design with dipolar-coupled nanomagnets arranged on a square lattice to (i) support transfer of information and (ii) perform logic operations. We introduce a thermal protocol, using thermally active nanomagnets as a means to perform computation. Within this scheme, the nanomagnets are initialized by a global magnetic field and thermally relax on raising the temperature with a resistive heater. We demonstrate error-free transfer of information in chains of up to 19 square rings and we show a high level of reliability with successful gate operations of ∼94% across more than 2000 logic gates. Finally, we present a functionally complete prototype NAND/NOR logic gate that could be implemented for advanced logic operations. Here we support our experiments with simulations of the thermally averaged output and determine the optimal gate parameters. Our approach provides a new pathway to a long standing problem concerning reliability in the use of nanomagnets for computation.
Global optimization of minority game by intelligent agents
NASA Astrophysics Data System (ADS)
Xie, Yan-Bo; Wang, Bing-Hong; Hu, Chin-Kun; Zhou, Tao
2005-10-01
We propose a new model of the minority game with intelligent agents who use a trial-and-error method to make choices, such that the variance σ2 and the total loss in this model reach their theoretical minimum values in the long-time limit and the global optimization of the system is reached. This suggests that economic systems can self-organize into a highly optimized state through agents who make decisions based on inductive thinking, limited knowledge, and limited capabilities. When other kinds of agents are also present, the simulation results and analytic calculations show that the intelligent agents can gain profits from producers and are much more competent than the noise traders and the conventional agents in the original minority games proposed by Challet and Zhang.
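A toy simulation in the spirit of such trial-and-error agents is sketched below; the population size, the probabilistic choice rule, and the 0.01 adaptation step are illustrative assumptions rather than the authors' exact prescription.

```python
# Toy minority game with agents that adapt their choice probability by trial and error.
import numpy as np

rng = np.random.default_rng(1)
N, T = 101, 2000                                    # odd number of agents, time steps
p = np.full(N, 0.5)                                 # each agent's probability of choosing side A
attendance = []
for t in range(T):
    choices = rng.random(N) < p                     # True = side A
    n_A = int(choices.sum())
    minority_is_A = n_A < N / 2
    losers = choices != minority_is_A               # agents who ended up on the majority side
    p[losers] += 0.01 if minority_is_A else -0.01   # nudge losers toward the minority side
    p = np.clip(p, 0.0, 1.0)
    attendance.append(n_A)

sigma2 = np.var(np.array(attendance[500:]) - N / 2)   # fluctuation measure after a transient
print("sigma^2 per agent:", sigma2 / N)
```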
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly-developed optimization technique based on the coupling of Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportionate to the number of triangle changes per frame, which is typically a few percent of the output mesh size, hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
An error-tuned model for sensorimotor learning
Sadeghi, Mohsen; Wolpert, Daniel M.
2017-01-01
Current models of sensorimotor control posit that motor commands are generated by combining multiple modules which may consist of internal models, motor primitives or motor synergies. The mechanisms which select modules based on task requirements and modify their output during learning are therefore critical to our understanding of sensorimotor control. Here we develop a novel modular architecture for multi-dimensional tasks in which a set of fixed primitives are each able to compensate for errors in a single direction in the task space. The contribution of the primitives to the motor output is determined by both top-down contextual information and bottom-up error information. We implement this model for a task in which subjects learn to manipulate a dynamic object whose orientation can vary. In the model, visual information regarding the context (the orientation of the object) allows the appropriate primitives to be engaged. This top-down module selection is implemented by a Gaussian function tuned for the visual orientation of the object. Second, each module's contribution adapts across trials in proportion to its ability to decrease the current kinematic error. Specifically, adaptation is implemented by cosine tuning of primitives to the current direction of the error, which we show to be theoretically optimal for reducing error. This error-tuned model makes two novel predictions. First, interference should occur between alternating dynamics only when the kinematic errors associated with each oppose one another. In contrast, dynamics which lead to orthogonal errors should not interfere. Second, kinematic errors alone should be sufficient to engage the appropriate modules, even in the absence of contextual information normally provided by vision. We confirm both these predictions experimentally and show that the model can also account for data from previous experiments. Our results suggest that two interacting processes account for module selection during sensorimotor control and learning. PMID:29253869
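The adaptation rule described above, in which each primitive's contribution is updated in proportion to the cosine of the angle between its preferred direction and the current error, can be sketched in a few lines. The number of primitives, the learning rate, and the constant force to be compensated are illustrative assumptions.

```python
# Sketch of cosine-tuned adaptation of fixed directional primitives.
import numpy as np

angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
dirs = np.column_stack([np.cos(angles), np.sin(angles)])    # preferred directions (8 x 2)
weights = np.zeros(8)
eta = 0.1                                                   # learning rate

def motor_output(w):
    return w @ dirs                                         # summed contribution of the primitives

true_force = np.array([1.0, 0.5])                           # force the learner must compensate
for trial in range(200):
    error = true_force - motor_output(weights)              # kinematic error on this trial
    # each primitive adapts in proportion to |error| * cos(angle to error), i.e. dirs @ error
    weights += eta * dirs @ error
print(motor_output(weights))                                # converges toward true_force
```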
Flyback CCM inverter for AC module applications: iterative learning control and convergence analysis
NASA Astrophysics Data System (ADS)
Lee, Sung-Ho; Kim, Minsung
2017-12-01
This paper presents an iterative learning controller (ILC) for an interleaved flyback inverter operating in continuous conduction mode (CCM). The flyback CCM inverter features small output ripple current, high efficiency, and low cost, and hence it is well suited for photovoltaic power applications. However, it exhibits non-minimum-phase behaviour because its transfer function from control duty to output current has a right-half-plane (RHP) zero. Moreover, the flyback CCM inverter suffers from time-varying grid voltage disturbance. Thus, a conventional control scheme results in inaccurate output tracking. To overcome these problems, an ILC is developed and, for the first time, applied to the flyback inverter operating in CCM. The ILC makes use of both predictive and current learning terms, which help the system output converge to the reference trajectory. We take into account the nonlinear averaged model and use it to construct the proposed controller. It is proven that the system output globally converges to the reference trajectory in the absence of state disturbances, output noise, or initial state errors. Numerical simulations are performed to validate the proposed control scheme, and experiments using a 400-W AC module prototype are carried out to demonstrate its practical feasibility.
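The core ILC idea, reusing the tracking error from one repetition to update the control input for the next, can be sketched as below. The first-order discrete plant, the learning gain, and the one-step shift are assumptions chosen only to make the sketch self-contained; they are not the inverter's averaged model.

```python
# Minimal iterative learning control sketch over a repetitive reference.
import numpy as np

def plant(u):
    """Toy discrete plant y[k+1] = 0.9*y[k] + 0.5*u[k], standing in for the real dynamics."""
    y = np.zeros_like(u)
    for k in range(len(u) - 1):
        y[k + 1] = 0.9 * y[k] + 0.5 * u[k]
    return y

T = 200
ref = np.sin(2 * np.pi * np.arange(T) / T)          # periodic reference (e.g., a grid-current shape)
u, gain = np.zeros(T), 0.8                          # control input over one period, learning gain
for trial in range(30):
    e = ref - plant(u)
    u[:-1] += gain * e[1:]                          # shift by one step to account for plant delay
print("final RMS tracking error:", np.sqrt(np.mean((ref - plant(u)) ** 2)))
```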
Research into Kinect/Inertial Measurement Units Based on Indoor Robots.
Li, Huixia; Wen, Xi; Guo, Hang; Yu, Min
2018-03-12
As indoor mobile navigation suffers from low positioning accuracy and accumulated error, we carried out research into an integrated localization system for a robot based on a Kinect and an Inertial Measurement Unit (IMU). In this paper, close-range stereo images are used to calculate the attitude information and the translation between adjacent positions of the robot by means of the absolute orientation algorithm, improving the calculation accuracy of the robot's movement. Relying on the Kinect visual measurements and the strapdown IMU, we also use Kalman filtering to estimate the errors of the position and attitude outputs in order to obtain the optimal estimate and correct the errors. Experimental results show that the proposed method is able to improve the positioning accuracy and stability of the indoor mobile robot.
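A highly simplified one-dimensional error-state filter in the spirit of this Kinect/IMU fusion is sketched below: the IMU-propagated error state is predicted each step and corrected by a Kinect-derived position measurement. The state definition, noise levels, and the simulated measurement are assumptions for illustration.

```python
# Simplified 1-D error-state Kalman filter: predict with an IMU-like model, correct with a
# Kinect-like position measurement.
import numpy as np

dt = 0.02
F = np.array([[1.0, dt], [0.0, 1.0]])               # state: [position error, velocity error]
H = np.array([[1.0, 0.0]])                          # Kinect observes the position error
Q = np.diag([1e-5, 1e-4])                           # process noise (IMU drift)
R = np.array([[1e-2]])                              # Kinect measurement noise

x, P = np.zeros((2, 1)), np.eye(2)
rng = np.random.default_rng(0)
for k in range(500):
    x = F @ x                                       # predict
    P = F @ P @ F.T + Q
    z = np.array([[0.1 + 0.1 * rng.standard_normal()]])   # simulated position-error measurement
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x = x + K @ (z - H @ x)                         # update (correct the error estimate)
    P = (np.eye(2) - K @ H) @ P
print(x.ravel())                                    # estimated position/velocity errors
```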
Interval Predictor Models for Data with Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Lacerda, Marcio J.; Crespo, Luis G.
2017-01-01
An interval predictor model (IPM) is a computational model that predicts the range of an output variable given input-output data. This paper proposes strategies for constructing IPMs based on semidefinite programming and sum of squares (SOS). The models are optimal in the sense that they yield an interval valued function of minimal spread containing all the observations. Two different scenarios are considered. The first one is applicable to situations where the data is measured precisely whereas the second one is applicable to data subject to known biases and measurement error. In the latter case, the IPMs are designed to fully contain regions in the input-output space where the data is expected to fall. Moreover, we propose a strategy for reducing the computational cost associated with generating IPMs as well as means to simulate them. Numerical examples illustrate the usage and performance of the proposed formulations.
Linear System Control Using Stochastic Learning Automata
NASA Technical Reports Server (NTRS)
Ziyad, Nigel; Cox, E. Lucien; Chouikha, Mohamed F.
1998-01-01
This paper explains the use of a Stochastic Learning Automata (SLA) to control switching between three systems to produce the desired output response. The SLA learns the optimal choice of the damping ratio for each system to achieve a desired result. We show that the SLA can learn these states for the control of an unknown system with the proper choice of the error criteria. The results of using a single automaton are compared to using multiple automata.
Novel multireceiver communication systems configurations based on optimal estimation theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates various phase processes received at different receivers with coupled phase-locked loops wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of energy per bit to noise power spectral density ratio is achieved. A novel adaptive algorithm for the estimator of the signal model parameters when these are not known a priori is also presented.
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm’s risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
Construction of optimal resources for concatenated quantum protocols
NASA Astrophysics Data System (ADS)
Pirker, A.; Wallnöfer, J.; Briegel, H. J.; Dür, W.
2017-06-01
We consider the explicit construction of resource states for measurement-based quantum information processing. We concentrate on special-purpose resource states that are capable to perform a certain operation or task, where we consider unitary Clifford circuits as well as non-trace-preserving completely positive maps, more specifically probabilistic operations including Clifford operations and Pauli measurements. We concentrate on 1 →m and m →1 operations, i.e., operations that map one input qubit to m output qubits or vice versa. Examples of such operations include encoding and decoding in quantum error correction, entanglement purification, or entanglement swapping. We provide a general framework to construct optimal resource states for complex tasks that are combinations of these elementary building blocks. All resource states only contain input and output qubits, and are hence of minimal size. We obtain a stabilizer description of the resulting resource states, which we also translate into a circuit pattern to experimentally generate these states. In particular, we derive recurrence relations at the level of stabilizers as key analytical tool to generate explicit (graph) descriptions of families of resource states. This allows us to explicitly construct resource states for encoding, decoding, and syndrome readout for concatenated quantum error correction codes, code switchers, multiple rounds of entanglement purification, quantum repeaters, and combinations thereof (such as resource states for entanglement purification of encoded states).
Adaptive Identification of Fluid-Dynamic Systems
2001-06-14
Fig. 1. Modeling of a SISO system using an adaptive filter: the unknown system and the adaptive filter are driven by the same input u; the filter output y is subtracted from the desired output d to form the error e. The cost function is J = E[e²(n)] (12), where E[·] is the expectation operator and e(n) = d(n) − y(n) is the error between the desired system output and the filter output. The input vector is U(n) = [u(n), u(n−1), …, u(n−N+1)]ᵀ, from which the filter output y(n) and the error e(n) = d(n) − y(n) are formed.
An H(∞) control approach to robust learning of feedforward neural networks.
Jing, Xingjian
2011-09-01
A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially for the case that the output data is fast changing with respect to the input or the output data is corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing the linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞)-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.
Streamflow Prediction based on Chaos Theory
NASA Astrophysics Data System (ADS)
Li, X.; Wang, X.; Babovic, V. M.
2015-12-01
Chaos theory is a popular approach in hydrologic time series prediction. The local model (LM) based on this theory uses time-delay embedding to reconstruct the phase-space diagram. The method's efficacy depends on the embedding parameters, i.e. the embedding dimension, time lag, and number of nearest neighbors, so optimal estimation of these parameters is critical to its application. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately, which may lead to locally optimal choices and thus limits prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find globally optimal embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). The proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization allows the local model to provide more accurate predictions than local optimization. The LM combined with SA shows additional advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
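As a hedged sketch of the global search idea, the snippet below uses SciPy's dual_annealing to jointly tune the embedding dimension, time lag, and neighbour count of a simple zeroth-order local model on a toy chaotic series; the logistic-map data, parameter bounds, and nearest-neighbour-mean predictor are stand-ins and do not reproduce the authors' streamflow experiments.

```python
import numpy as np
from scipy.optimize import dual_annealing

def local_model_rmse(params, series, train_frac=0.7):
    """One-step prediction RMSE of a zeroth-order local (nearest-neighbour) model
    for a candidate (embedding dimension, time lag, neighbour count)."""
    m, tau, k = (max(1, int(round(p))) for p in params)
    n = len(series) - (m - 1) * tau - 1
    if n < 100:
        return 1e9                                   # infeasible embedding
    X = np.column_stack([series[i * tau: i * tau + n] for i in range(m)])
    y = series[(m - 1) * tau + 1: (m - 1) * tau + 1 + n]
    split = int(train_frac * n)
    errs = []
    for i in range(split, n):
        d = np.linalg.norm(X[:split] - X[i], axis=1)
        nn = np.argsort(d)[:k]
        errs.append(np.mean(y[nn]) - y[i])           # predict with the neighbours' mean
    return float(np.sqrt(np.mean(np.square(errs))))

# toy chaotic series (logistic map) standing in for a streamflow record
x = np.empty(1200)
x[0] = 0.4
for t in range(1199):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

bounds = [(2, 8), (1, 10), (1, 20)]                  # (dimension, lag, neighbours)
res = dual_annealing(local_model_rmse, bounds, args=(x,), maxiter=30,
                     no_local_search=True, seed=1)
print("optimal (m, tau, k):", [int(round(v)) for v in res.x], " RMSE:", round(res.fun, 4))
```

The same objective could be handed to a genetic algorithm instead; the point is only that all three embedding parameters are searched jointly rather than fixed one at a time.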
Optimum color filters for CCD digital cameras
NASA Astrophysics Data System (ADS)
Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl
1993-12-01
As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle and at the same time with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that it is possible with such an optimized color camera to achieve such a high colorimetric performance that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
A new open-loop fiber optic gyro error compensation method based on angular velocity error modeling.
Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing
2015-02-27
With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well for relatively large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we investigated previously, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.42%, respectively, so the performance of this method is better than that of the direct modeling for gyro angular velocity.
A New Open-Loop Fiber Optic Gyro Error Compensation Method Based on Angular Velocity Error Modeling
Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing
2015-01-01
With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well for relatively large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we investigated previously, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.2%, respectively, so the performance of this method is better than that of the direct modeling for gyro angular velocity. PMID:25734642
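To make the compensation scheme concrete, here is a minimal Python sketch of an RBF-style model that maps output voltage u and temperature T to the angular velocity error Δω, which can then be subtracted from the raw gyro output; the training data, normalisation, number of centres, and kernel width are invented for illustration and do not reproduce the authors' network or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy training data standing in for measured OFOG samples:
# output voltage u (V), temperature T (deg C), extracted angular-velocity error dw (deg/s)
u = rng.uniform(-1.0, 1.0, 500)
T = rng.uniform(10.0, 60.0, 500)
dw = 0.05 * u**2 + 0.002 * (T - 35.0) + 0.001 * rng.standard_normal(500)

X = np.column_stack([u, (T - 35.0) / 25.0])     # crude input normalisation
y = dw

# RBF network: Gaussian hidden units on randomly chosen centres, linear output layer
n_centres, width = 30, 0.5
centres = X[rng.choice(len(X), n_centres, replace=False)]

def hidden(Xq):
    d2 = ((Xq[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

H = hidden(X)
w, *_ = np.linalg.lstsq(H, y, rcond=None)       # output weights by linear least squares

# compensation: predict the error for a new (u, T) pair and subtract it from the raw output
u_new, T_new = np.array([0.3]), np.array([50.0])
Xq = np.column_stack([u_new, (T_new - 35.0) / 25.0])
dw_pred = float(hidden(Xq) @ w)
print("predicted angular-velocity error:", dw_pred)
```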
Photonic crystal enhanced silicon cell based thermophotovoltaic systems
Yeng, Yi Xiang; Chan, Walker R.; Rinnerbauer, Veronika; ...
2015-01-30
We report the design, optimization, and experimental results of large area commercial silicon solar cell based thermophotovoltaic (TPV) energy conversion systems. Using global non-linear optimization tools, we demonstrate theoretically a maximum radiative heat-to-electricity efficiency of 6.4% and a corresponding output electrical power density of 0.39 W cm⁻² at temperature T = 1660 K when implementing both the optimized two-dimensional (2D) tantalum photonic crystal (PhC) selective emitter, and the optimized 1D tantalum pentoxide – silicon dioxide PhC cold-side selective filter. In addition, we have developed an experimental large area TPV test setup that enables accurate measurement of radiative heat-to-electricity efficiency for any emitter-filter-TPV cell combination of interest. In fact, the experimental results match extremely well with predictions of our numerical models. Our experimental setup achieved a maximum output electrical power density of 0.10 W cm⁻² and radiative heat-to-electricity efficiency of 1.18% at T = 1380 K using commercial wafer size back-contacted silicon solar cells.
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
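The short sketch below mirrors the idea of building an inverse function from per-output optimal inputs: for a toy two-input, single-output system it finds the minimum-cost input for each desired output with a constrained optimizer (standing in for the population of gradient-following agents used in the paper) and then spline-interpolates the results into a continuous inverse function; the plant, cost function, and output range are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import CubicSpline

# hypothetical two-input, single-output plant and input-effort cost
output = lambda x: x[0] + 2.0 * x[1]
cost = lambda x: x[0] ** 2 + 3.0 * x[1] ** 2

# for a grid of desired outputs, find the minimum-cost input achieving each of them
targets = np.linspace(0.5, 5.0, 20)
optimal_inputs = []
x0 = np.array([0.1, 0.1])
for y_des in targets:
    res = minimize(cost, x0,
                   constraints={"type": "eq", "fun": lambda x, y=y_des: output(x) - y})
    optimal_inputs.append(res.x)
    x0 = res.x                                   # warm start: step along the output direction
optimal_inputs = np.array(optimal_inputs)

# spline interpolation yields a continuous inverse function: desired output -> optimal input
inverse = [CubicSpline(targets, optimal_inputs[:, j]) for j in range(2)]
y_des = 2.7
x_opt = np.array([float(f(y_des)) for f in inverse])
print("optimal input for output", y_des, ":", np.round(x_opt, 3), "-> output", round(output(x_opt), 3))
```

An operator would then adjust only the single set point y_des while the stored inverse supplies the corresponding optimal inputs.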
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
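As a hedged illustration of output-error parameter estimation, the sketch below fits the two parameters of a simplified first-order top-oil temperature-rise model by minimising the residual between measured and simulated outputs; the model form, time step, load profile, and synthetic measurements are assumptions for illustration and are not the MIT model or the transformer data discussed above.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(p, load, dt, theta0=0.0):
    """Discretised first-order top-oil temperature-rise model:
    tau * d(theta)/dt = -theta + K * load**2, with parameters p = (tau, K)."""
    tau, K = p
    theta = np.empty_like(load)
    theta[0] = theta0
    for k in range(1, len(load)):
        theta[k] = theta[k - 1] + dt / tau * (-theta[k - 1] + K * load[k - 1] ** 2)
    return theta

# synthetic "measured" data standing in for transformer records
rng = np.random.default_rng(0)
dt, n = 0.1, 600
load = 0.8 + 0.2 * np.sin(np.linspace(0.0, 6 * np.pi, n))
theta_meas = simulate((2.5, 30.0), load, dt) + 0.3 * rng.standard_normal(n)

# output-error estimation: minimise the residual between measured and simulated output
residual = lambda p: simulate(p, load, dt) - theta_meas
fit = least_squares(residual, x0=[1.0, 10.0], bounds=([0.1, 1.0], [10.0, 100.0]))
print("estimated (tau, K):", np.round(fit.x, 2))
```

Because the residual is formed against the simulated output trajectory rather than against equation errors, the estimates are less sensitive to measurement noise and quantization in the recorded data.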
Simultaneous Helmert transformations among multiple frames considering all relevant measurements
NASA Astrophysics Data System (ADS)
Chang, Guobin; Lin, Peng; Bian, Hefang; Gao, Jingxiang
2018-03-01
Helmert or similarity models are widely employed to relate different coordinate frames. In practice, one often needs to transform coordinates from more than one old frame into a new one. One may perform separate Helmert transformations for each old frame; however, although each transformation is locally optimal, this is not globally optimal. Transformations among three frames, namely one new and two old, are studied as an example. Simultaneous Helmert transformations among all frames are also studied. Least-squares estimation of the transformation parameters and the coordinates in the new frame of all stations involved is performed. A functional model for the transformations among multiple frames is developed. A realistic stochastic model is adopted, in which not only non-common stations are taken into consideration, but also errors in all measurements are addressed. An algorithm of iterative linearizations and estimations is derived in detail. The proposed method is globally optimal, and, perhaps more importantly, it produces a unified network of the new frame providing coordinate estimates for all involved stations and the associated covariance matrix, with the latter being consistent with the true errors of the former. Simulations are conducted, and the results validate the superiority of the proposed combined method over separate approaches.
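For orientation, the snippet below estimates a single pairwise Helmert (similarity) transformation between two frames by the standard SVD-based least-squares (Procrustes) solution; this corresponds to the separate, locally optimal approach that the paper improves upon with its simultaneous multi-frame adjustment, and the station coordinates and noise levels are synthetic.

```python
import numpy as np

def helmert_fit(src, dst):
    """Least-squares similarity (Helmert) transform dst ~ s * R @ src + t,
    estimated by the SVD-based Procrustes method."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(A.T @ B)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    R = (U @ D @ Vt).T
    s = (S * np.diag(D)).sum() / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# synthetic common stations observed in an old and a new frame
rng = np.random.default_rng(1)
src = rng.uniform(-1000, 1000, (10, 3))
R_true, s_true, t_true = np.eye(3), 1.0 + 5e-6, np.array([12.0, -7.0, 3.0])
dst = s_true * src @ R_true.T + t_true + 0.002 * rng.standard_normal((10, 3))

s, R, t = helmert_fit(src, dst)
print("scale (ppm):", round((s - 1) * 1e6, 2), "translation:", np.round(t, 3))
```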
Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.
Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi
2016-05-30
Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general framework for depth mapping to optimize visual comfort for S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Aardt, Jan; Romanczyk, Paul; van Leeuwen, Martin
Terrestrial laser scanning (TLS) has emerged as an effective tool for rapid comprehensive measurement of object structure. Registration of TLS data is an important prerequisite to overcome the limitations of occlusion. However, due to the high dissimilarity of point cloud data collected from disparate viewpoints in the forest environment, adequate marker-free registration approaches have not been developed. The majority of studies instead rely on the utilization of artificial tie points (e.g., reflective tooling balls) placed within a scene to aid in coordinate transformation. We present a technique for generating view-invariant feature descriptors that are intrinsic to the point cloud data and, thus, enable blind marker-free registration in forest environments. To overcome the limitation of initial pose estimation, we employ a voting method to blindly determine the optimal pairwise transformation parameters, without an a priori estimate of the initial sensor pose. To provide embedded error metrics, we developed a set theory framework in which a circular transformation is traversed between disjoint tie point subsets. This provides an upper estimate of the Root Mean Square Error (RMSE) confidence associated with each pairwise transformation. Output RMSE errors are commensurate with the RMSE of input tie point locations. Thus, while the mean output RMSE = 16.3 cm, improved results could be achieved with a more precise laser scanning system. This study 1) quantifies the RMSE of the proposed marker-free registration approach, 2) assesses the validity of embedded confidence metrics using receiver operator characteristic (ROC) curves, and 3) informs optimal sample spacing considerations for TLS data collection in New England forests. Furthermore, while the implications for rapid, accurate, and precise forest inventory are obvious, the conceptual framework outlined here could potentially be extended to built environments.
Van Aardt, Jan; Romanczyk, Paul; van Leeuwen, Martin; ...
2016-04-04
Terrestrial laser scanning (TLS) has emerged as an effective tool for rapid comprehensive measurement of object structure. Registration of TLS data is an important prerequisite to overcome the limitations of occlusion. However, due to the high dissimilarity of point cloud data collected from disparate viewpoints in the forest environment, adequate marker-free registration approaches have not been developed. The majority of studies instead rely on the utilization of artificial tie points (e.g., reflective tooling balls) placed within a scene to aid in coordinate transformation. We present a technique for generating view-invariant feature descriptors that are intrinsic to the point cloud data and, thus, enable blind marker-free registration in forest environments. To overcome the limitation of initial pose estimation, we employ a voting method to blindly determine the optimal pairwise transformation parameters, without an a priori estimate of the initial sensor pose. To provide embedded error metrics, we developed a set theory framework in which a circular transformation is traversed between disjoint tie point subsets. This provides an upper estimate of the Root Mean Square Error (RMSE) confidence associated with each pairwise transformation. Output RMSE errors are commensurate with the RMSE of input tie point locations. Thus, while the mean output RMSE = 16.3 cm, improved results could be achieved with a more precise laser scanning system. This study 1) quantifies the RMSE of the proposed marker-free registration approach, 2) assesses the validity of embedded confidence metrics using receiver operator characteristic (ROC) curves, and 3) informs optimal sample spacing considerations for TLS data collection in New England forests. Furthermore, while the implications for rapid, accurate, and precise forest inventory are obvious, the conceptual framework outlined here could potentially be extended to built environments.
Discrete distributed strain sensing of intelligent structures
NASA Technical Reports Server (NTRS)
Anderson, Mark S.; Crawley, Edward F.
1992-01-01
Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
Optimal design of a microgripper-type actuator based on AlN/Si heterogeneous bimorph
NASA Astrophysics Data System (ADS)
Ruiz, D.; Díaz-Molina, A.; Sigmund, O.; Donoso, A.; Bellido, J. C.; Sánchez-Rojas, J. L.
2017-05-01
This work presents a systematic procedure to design piezoelectrically actuated microgrippers. Topology optimization combined with optimal design of electrodes is used to maximize the displacement at the output port of the gripper. Fabrication at the microscale raises an important issue: the difficulty of placing a piezoelectric film on both the top and bottom of the host layer. Due to the non-symmetric lamination of the structure, out-of-plane bending spoils the behaviour of the gripper. Suppression of this out-of-plane deformation is the main novelty introduced. In addition, a robust formulation approach is used in order to control the length scale in the whole domain and to reduce the sensitivity of the designs to small manufacturing errors.
NASA Astrophysics Data System (ADS)
Wong, C.-W.; Yew, T.-K.; Chong, K.-K.; Tan, W.-C.; Tan, M.-H.; Lim, B.-H.
2017-11-01
This paper presents a systematic approach for optimizing the design of ultra-high concentrator photovoltaic (UHCPV) system comprised of non-imaging dish concentrator (primary optical element) and crossed compound parabolic concentrator (secondary optical element). The optimization process includes the design of primary and secondary optics by considering the focal distance, spillage losses and rim angle of the dish concentrator. The imperfection factors, i.e. mirror reflectivity of 93%, lens’ optical efficiency of 85%, circumsolar ratio of 0.2 and mirror surface slope error of 2 mrad, were considered in the simulation to avoid the overestimation of output power. The proposed UHCPV system is capable of attaining effective ultra-high solar concentration ratio of 1475 suns and DC system efficiency of 31.8%.
Optimized scheduling technique of null subcarriers for peak power control in 3GPP LTE downlink.
Cho, Soobum; Park, Sang Kyu
2014-01-01
Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, a high peak-to-average power ratio (PAPR) can degrade power efficiency. The well-known PAPR reduction technique, dummy sequence insertion (DSI), can be a realistic solution because of its structural simplicity. However, the heavy use of subcarriers for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence and that a ratio of 16 to 20 between the WHT and randomly generated sequences gives the maximum PAPR reduction performance. The number of near-optimal iterations is derived to avoid exhaustive iteration. It is also shown that there is no bit error rate (BER) degradation with the proposed technique in the LTE downlink system.
Optimized Scheduling Technique of Null Subcarriers for Peak Power Control in 3GPP LTE Downlink
Park, Sang Kyu
2014-01-01
Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, a high peak-to-average power ratio (PAPR) can degrade power efficiency. The well-known PAPR reduction technique, dummy sequence insertion (DSI), can be a realistic solution because of its structural simplicity. However, the heavy use of subcarriers for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence and that a ratio of 16 to 20 between the WHT and randomly generated sequences gives the maximum PAPR reduction performance. The number of near-optimal iterations is derived to avoid exhaustive iteration. It is also shown that there is no bit error rate (BER) degradation with the proposed technique in the LTE downlink system. PMID:24883376
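The sketch below illustrates the basic mechanics of dummy sequence insertion on null subcarriers: it computes the PAPR of a toy OFDM symbol and then tries rows of a Walsh-Hadamard matrix as dummy sequences on the null tones, keeping the best one; the subcarrier layout, QPSK mapping, and normalisation are hypothetical, and the example ignores the LTE resource grid and the iteration scheduling studied in the paper.

```python
import numpy as np
from scipy.linalg import hadamard

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# hypothetical OFDM symbol: 64 subcarriers, 48 data tones and 16 null (guard) tones
N = 64
data_idx = np.arange(8, 56)
null_idx = np.setdiff1d(np.arange(N), data_idx)
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], len(data_idx)) + 1j * rng.choice([-1, 1], len(data_idx))) / np.sqrt(2)

X = np.zeros(N, dtype=complex)
X[data_idx] = qpsk
baseline = papr_db(np.fft.ifft(X))

# dummy sequence insertion: try each row of a Walsh-Hadamard matrix on the null tones
H = hadamard(len(null_idx))
best = baseline
for row in H:
    Xc = X.copy()
    Xc[null_idx] = row / np.sqrt(len(null_idx))
    best = min(best, papr_db(np.fft.ifft(Xc)))

print(f"PAPR without DSI: {baseline:.2f} dB, best with WHT dummy sequences: {best:.2f} dB")
```

Because the dummy tones carry no data, the receiver simply discards them, which is why the scheme introduces no BER penalty.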
Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.
Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
Smart photodetector arrays for error control in page-oriented optical memory
NASA Astrophysics Data System (ADS)
Schaffer, Maureen Elizabeth
1998-12-01
Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data modulation and error correction coding for the purpose of error control in the POM system. These techniques are adapted, where possible, for 2D data and evaluated as to their suitability for a SPA implementation in terms of BER, code rate, decoder time and pixel complexity. Our analysis shows that differential data modulation combined with relatively simple block codes known as array codes provide a powerful means to achieve the desired data transfer rates while reducing error rates to industry requirements. Finally, we demonstrate the first smart photodetector array designed to perform parallel error correction on an entire page of data and satisfy the sustained data rates of page-oriented optical memories. Our implementation integrates a monolithic PN photodiode array and differential input receiver for optoelectronic signal conversion with a cluster error correction code using 0.35-μm CMOS. This approach provides high sensitivity, low electrical power dissipation, and fast parallel correction of 2 x 2-bit cluster errors in an 8 x 8 bit code block to achieve corrected output data rates scalable to 102 Gbps in the current technology increasing to 1.88 Tbps in 0.1-μm CMOS.
A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.
Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng
2016-05-01
In this paper, an extreme learning control (ELC) framework using the single-hidden-layer feedforward network (SLFN) with random hidden nodes for tracking an unmanned surface vehicle suffering from unknown dynamics and external disturbances is proposed. By combining tracking errors with derivatives, an error surface and transformed states are defined to encapsulate unknown dynamics and disturbances into a lumped vector field of transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator which does not require a priori system knowledge nor tuning input weights. Only output weights of the SLFN need to be updated by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, and thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.
High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.
Wang, Fei; Xie, Zhaoxin; Chen, Zuo
2014-01-01
Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control capability. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the location map bit length, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by our proposed scheme. Experiments show that this algorithm improves the SNR of embedded audio signals and the embedding capacity, drastically reduces the location map bit length, and enhances capacity control capability.
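To illustrate the underlying mechanism, here is a minimal prediction-error-expansion scheme with histogram shifting on integer audio samples; it uses a fixed two-neighbour predictor on odd samples rather than the optimized prediction coefficients and threshold selection described above, and the threshold, toy signal, and payload are assumptions for illustration.

```python
import numpy as np

T = 2  # prediction-error threshold trading capacity against distortion

def embed(signal, payload):
    """Reversible prediction-error expansion with histogram shifting.
    Odd samples are modified; even samples stay untouched so the decoder
    can rebuild exactly the same predictions."""
    x = signal.astype(np.int64).copy()
    bits = list(payload)
    for i in range(1, len(x) - 1, 2):
        pred = (x[i - 1] + x[i + 1]) // 2
        e = x[i] - pred
        if -T <= e < T:                          # expandable bin: embed one bit
            b = bits.pop(0) if bits else 0       # pad with zeros when payload runs out
            x[i] = pred + 2 * e + b
        elif e >= T:                             # histogram shifting, carries no payload
            x[i] = pred + e + T
        else:
            x[i] = pred + e - T
    return x

def extract(stego):
    x = stego.astype(np.int64).copy()
    bits = []
    for i in range(1, len(x) - 1, 2):
        pred = (x[i - 1] + x[i + 1]) // 2
        e = x[i] - pred
        if -2 * T <= e < 2 * T:                  # expanded error: recover bit and sample
            bits.append(int(e) & 1)
            x[i] = pred + (int(e) >> 1)
        elif e >= 2 * T:                         # undo the shift
            x[i] = pred + e - T
        else:
            x[i] = pred + e + T
    return x, bits

t = np.arange(400)
audio = np.round(800 * np.sin(2 * np.pi * t / 200)).astype(np.int64)  # smooth toy "audio"
rng = np.random.default_rng(0)
payload = [int(b) for b in rng.integers(0, 2, 30)]

stego = embed(audio, payload)
restored, bits = extract(stego)
print("lossless recovery:", np.array_equal(restored, audio),
      "payload recovered:", bits[:len(payload)] == payload)
```

Because even samples are never modified, the decoder reconstructs exactly the same predictions, which is what makes the embedding reversible.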
Consensus of satellite cluster flight using an energy-matching optimal control method
NASA Astrophysics Data System (ADS)
Luo, Jianjun; Zhou, Liang; Zhang, Bo
2017-11-01
This paper presents an optimal control method for consensus of satellite cluster flight under a kind of energy matching condition. Firstly, the relation between energy matching and satellite periodically bounded relative motion is analyzed, and the satellite energy matching principle is applied to configure the initial conditions. Then, period-delayed errors are adopted as state variables to establish the period-delayed error dynamics models of a single satellite and of the cluster. Next, a novel satellite cluster feedback control protocol with coupling gain is designed, so that the satellite cluster periodically bounded relative motion consensus problem (the period-delayed error state consensus problem) is transformed into the stability analysis of a set of matrices with the same low dimension. Based on the consensus region theory used in research on multi-agent system consensus issues, the coupling gain can be obtained to satisfy the consensus region requirement and to decouple the satellite cluster information topology from the feedback control gain matrix, which can be determined by the linear quadratic regulator (LQR) optimal method. This method achieves consensus of the satellite cluster period-delayed errors, leading to consistency of the semi-major axes (SMA) and energy matching of the satellite cluster. The satellites can then exhibit globally coordinated cluster behavior. Finally, the feasibility and effectiveness of the proposed energy-matching optimal consensus for satellite cluster flight are verified through numerical simulations.
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
Gopal, S; Do, T; Pooni, J S; Martinelli, G
2014-03-01
The Mostcare monitor is a non-invasive cardiac output monitor. It has been well validated in cardiac surgical patients but there is limited evidence on its use in patients with severe sepsis and septic shock. The study included the first 22 consecutive patients with severe sepsis and septic shock in whom the flotation of a pulmonary artery catheter was deemed necessary to guide clinical management. Haemodynamic measurements, including cardiac output, cardiac index and stroke volume, were simultaneously calculated and recorded from a thermodilution pulmonary artery catheter and from the Mostcare monitor. The two methods of measuring cardiac output were compared by Bland-Altman statistics and linear regression analysis. A percentage error of less than 30% was defined as acceptable for this study. Bland-Altman analysis for cardiac output showed a bias of 0.31 L.min-1, precision (=SD) of 1.97 L.min-1 and a percentage error of 62.54%. For cardiac index the bias was 0.21 L.min-1.m-2, precision of 1.10 L.min-1.m-2 and a percentage error of 64%. For stroke volume the bias was 5 mL, precision of 24.46 mL and percentage error of 70.21%. Linear regression produced a correlation coefficient r2 for cardiac output, cardiac index, and stroke volume, of 0.403, 0.306, and 0.3 respectively. Compared to thermodilution cardiac output, cardiac output values obtained from the Mostcare monitor have an unacceptably high percentage error. The Mostcare monitor was shown to be an unreliable device for measuring cardiac output in patients with severe sepsis and septic shock in an intensive care unit.
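For reference, the Bland-Altman quantities quoted above (bias, precision as the SD of the paired differences, and the Critchley-style percentage error) can be computed as in the following sketch; the paired readings are synthetic stand-ins and do not reproduce the study data.

```python
import numpy as np

# paired cardiac-output readings (L/min): reference thermodilution vs the test monitor
# (synthetic values only, loosely mimicking the reported bias and scatter)
rng = np.random.default_rng(3)
co_ref = rng.uniform(3.0, 9.0, 22)
co_test = co_ref + 0.31 + 1.97 * rng.standard_normal(22)

diff = co_test - co_ref
bias = diff.mean()                               # Bland-Altman bias
precision = diff.std(ddof=1)                     # precision = SD of the differences
loa = (bias - 1.96 * precision, bias + 1.96 * precision)   # limits of agreement
pct_error = 100 * 1.96 * precision / np.mean((co_test + co_ref) / 2)

print(f"bias {bias:.2f} L/min, precision {precision:.2f} L/min, "
      f"LoA [{loa[0]:.2f}, {loa[1]:.2f}], percentage error {pct_error:.1f}%")
```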
Simplified planar model of a car steering system with rack and pinion and McPherson suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-09-01
The paper presents the analysis and optimization of a steering system with rack and pinion and McPherson suspension using a spatial model and an equivalent simplified planar model. The dimensions of the steering linkage that give the minimum steering error can be estimated using the planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvres and the angle calculated for the same wheel from the Ackermann principle. For a given linear rack displacement, the angular displacements of the steering arms are determined while simultaneously ensuring the best transmission-angle characteristics, (i) without and (ii) with imposing a linear correlation between input and output. Numerical examples are used to illustrate the proposed method.
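As a small worked example of the steering-error definition, the sketch below computes the ideal outer-wheel angle from the Ackermann condition cot(δ_outer) − cot(δ_inner) = w/L and compares it with a crude stand-in linkage characteristic; the track width, wheelbase, and linkage relation are hypothetical and not taken from the paper.

```python
import numpy as np

# vehicle geometry (hypothetical): track width w and wheelbase L in metres
w, L = 1.5, 2.6

def ackermann_outer(delta_inner):
    """Ideal outer-wheel angle from the Ackermann condition
    cot(delta_outer) - cot(delta_inner) = w / L (angles in radians)."""
    return np.arctan(1.0 / (1.0 / np.tan(delta_inner) + w / L))

# actual outer-wheel angles delivered by the linkage (crude stand-in characteristic)
delta_inner = np.radians(np.linspace(2, 35, 12))
delta_outer_actual = delta_inner * 0.93

steering_error = np.degrees(delta_outer_actual - ackermann_outer(delta_inner))
print("steering error (deg) over the manoeuvre:", np.round(steering_error, 2))
print("max |error|:", np.abs(steering_error).max().round(2), "deg")
```

Minimizing the maximum of this error over the rack travel, as a function of the linkage dimensions, is the optimization task the planar model is used for.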
Theory and design of interferometric synthetic aperture radars
NASA Technical Reports Server (NTRS)
Rodriguez, E.; Martin, J. M.
1992-01-01
A derivation of the signal statistics, an optimal estimator of the interferometric phase, and the expression necessary to calculate the height-error budget are presented. These expressions are used to derive methods of optimizing the parameters of the interferometric synthetic aperture radar system (InSAR), and are then employed in a specific design example for a system to perform high-resolution global topographic mapping with a one-year mission lifetime, subject to current technological constraints. A Monte Carlo simulation of this InSAR system is performed to evaluate its performance for realistic topography. The results indicate that this system has the potential to satisfy the stringent accuracy and resolution requirements for geophysical use of global topographic data.
Klous, Miriam; Klous, Sander
2010-07-01
The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method for a chain of rigid bodies is presented, interconnected in spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than the Veldpaus-method and similar to the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for the solution of the Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using a particle swarm optimization algorithm as a viable global search method, hybridized with an active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results obtained are made with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
Impact of device level faults in a digital avionic processor
NASA Technical Reports Server (NTRS)
Suk, Ho Kim
1989-01-01
This study describes an experimental analysis of the impact of gate and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed mode simulation, faults were injected at the gate (stuck-at) and at the transistor levels and, their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model, as far as error activity or detection within a functional unit is concerned. In so far as error activity outside the injected unit and at the output pins are concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device fault by location, both internally and at the output pins.
EXPLICIT LEAST-DEGREE BOUNDARY FILTERS FOR DISCONTINUOUS GALERKIN.
Nguyen, Dang-Manh; Peters, Jörg
2017-01-01
Convolving the output of Discontinuous Galerkin (DG) computations using spline filters can improve both smoothness and accuracy of the output. At domain boundaries, these filters have to be one-sided for non-periodic boundary conditions. Recently, position-dependent smoothness-increasing accuracy-preserving (PSIAC) filters were shown to be a superset of the well-known one-sided RLKV and SRV filters. Since PSIAC filters can be formulated symbolically, PSIAC filtering amounts to forming linear products with local DG output and so offers a more stable and efficient implementation. The paper introduces a new class of PSIAC filters NP0 that have small support and are piecewise constant. Extensive numerical experiments for the canonical hyperbolic test equation show NP0 filters outperform the more complex known boundary filters. NP0 filters typically reduce the L∞ error in the boundary region below that of the interior where optimally superconvergent symmetric filters of the same support are applied. NP0 filtering can be implemented as forming linear combinations of the data with short rational weights. Exact derivatives of the convolved output are easy to compute.
EXPLICIT LEAST-DEGREE BOUNDARY FILTERS FOR DISCONTINUOUS GALERKIN*
Nguyen, Dang-Manh; Peters, Jörg
2017-01-01
Convolving the output of Discontinuous Galerkin (DG) computations using spline filters can improve both smoothness and accuracy of the output. At domain boundaries, these filters have to be one-sided for non-periodic boundary conditions. Recently, position-dependent smoothness-increasing accuracy-preserving (PSIAC) filters were shown to be a superset of the well-known one-sided RLKV and SRV filters. Since PSIAC filters can be formulated symbolically, PSIAC filtering amounts to forming linear products with local DG output and so offers a more stable and efficient implementation. The paper introduces a new class of PSIAC filters NP0 that have small support and are piecewise constant. Extensive numerical experiments for the canonical hyperbolic test equation show NP0 filters outperform the more complex known boundary filters. NP0 filters typically reduce the L∞ error in the boundary region below that of the interior where optimally superconvergent symmetric filters of the same support are applied. NP0 filtering can be implemented as forming linear combinations of the data with short rational weights. Exact derivatives of the convolved output are easy to compute. PMID:29081643
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
NASA Astrophysics Data System (ADS)
Altenau, E. H.; Pavelsky, T.; Andreadis, K.; Bates, P. D.; Neal, J. C.
2017-12-01
Multichannel rivers continue to be challenging features to quantify, especially at regional and global scales, which is problematic because accurate representations of such environments are needed to properly monitor the Earth's water cycle as it adjusts to climate change. It has been demonstrated that higher-complexity, 2D models outperform lower-complexity, 1D models in simulating multichannel river hydraulics at regional scales due to the inclusion of the channel network's connectivity. However, new remote sensing measurements from the future Surface Water and Ocean Topography (SWOT) mission and its airborne analog AirSWOT offer new observations that can be used to improve the lower-complexity, 1D models to achieve accuracies closer to the higher-complexity, 2D codes. Here, we use an Ensemble Kalman Filter (EnKF) to assimilate AirSWOT water surface elevation (WSE) measurements from a 2015 field campaign into a 1D hydrodynamic model along a 90 km reach of the Tanana River, AK. This work is the first to test data assimilation methods using real SWOT-like data from AirSWOT. Additionally, synthetic SWOT observations of WSE are generated across the same study site using a fine-resolution 2D model and assimilated into the coarser-resolution 1D model. Lastly, we compare the abilities of AirSWOT and the synthetic SWOT observations to improve spatial and temporal model outputs in WSEs. Results indicate 1D model outputs of spatially distributed WSEs improve as observational coverage increases, and improvements in temporal fluctuations in WSEs depend on the number of observations. Furthermore, results reveal that assimilation of AirSWOT observations produces greater error reductions in 1D model outputs compared to synthetic SWOT observations due to lower measurement errors. Both AirSWOT and the synthetic SWOT observations significantly lower spatial and temporal errors in 1D model outputs of WSEs.
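For readers unfamiliar with the assimilation step, the following is a minimal stochastic EnKF analysis update applied to a toy 1D water-surface-elevation profile with a handful of AirSWOT-like observations; the state size, ensemble size, observation nodes, and error levels are invented, and the sketch omits the hydrodynamic model propagation between updates.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_sd, H):
    """Stochastic ensemble Kalman filter analysis step.
    ensemble: (n_ens, n_state) prior states; obs: (n_obs,) observations;
    H: (n_obs, n_state) linear observation operator."""
    n_ens = ensemble.shape[0]
    rng = np.random.default_rng(42)
    X = ensemble - ensemble.mean(0)                         # state anomalies
    Y = X @ H.T                                             # observation-space anomalies
    R = np.eye(len(obs)) * obs_err_sd**2
    K = X.T @ Y @ np.linalg.inv(Y.T @ Y + (n_ens - 1) * R)  # ensemble Kalman gain
    perturbed_obs = obs + obs_err_sd * rng.standard_normal((n_ens, len(obs)))
    innovations = perturbed_obs - ensemble @ H.T
    return ensemble + innovations @ K.T

# toy example: WSE at 20 nodes, 30 ensemble members, observations at 5 nodes
rng = np.random.default_rng(0)
truth = np.linspace(120.0, 118.0, 20)                       # true WSE profile (m)
prior = truth + 0.4 + 0.3 * rng.standard_normal((30, 20))   # biased, noisy model ensemble
obs_nodes = [2, 6, 10, 14, 18]
H = np.zeros((5, 20)); H[np.arange(5), obs_nodes] = 1.0
obs = truth[obs_nodes] + 0.1 * rng.standard_normal(5)       # ~10 cm observation error

posterior = enkf_update(prior, obs, 0.1, H)
print("prior RMSE:", np.sqrt(((prior.mean(0) - truth) ** 2).mean()).round(3),
      "posterior RMSE:", np.sqrt(((posterior.mean(0) - truth) ** 2).mean()).round(3))
```

Lower observation error (as with AirSWOT versus the noisier synthetic SWOT case) enters only through obs_err_sd, which is what drives the larger error reductions reported above.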
Tracking historical increases in nitrogen-driven crop production possibilities
NASA Astrophysics Data System (ADS)
Mueller, N. D.; Lassaletta, L.; Billen, G.; Garnier, J.; Gerber, J. S.
2015-12-01
The environmental costs of nitrogen use have prompted a focus on improving the efficiency of nitrogen use in the global food system, the primary source of nitrogen pollution. Typical approaches to improving agricultural nitrogen use efficiency include more targeted field-level use (timing, placement, and rate) and modification of the crop mix. However, global efficiency gains can also be achieved by improving the spatial allocation of nitrogen between regions or countries, due to consistent diminishing returns at high nitrogen use. This concept is examined by constructing a tradeoff frontier (or production possibilities frontier) describing global crop protein yield as a function of applied nitrogen from all sources, given optimal spatial allocation. Yearly variation in country-level input-output nitrogen budgets is utilized to parameterize country-specific hyperbolic yield-response models. Response functions are further characterized for three ~15-year eras beginning in 1961, and a series of calculations uses these curves to simulate optimal spatial allocation in each era and determine the frontier. The analyses reveal that excess nitrogen (in recent years) could be reduced by ~40% given optimal spatial allocation. Over time, we find that gains in yield potential and in-country nitrogen use efficiency have led to increases in the global nitrogen production possibilities frontier. However, this promising shift has been accompanied by an actual spatial distribution of nitrogen use that has become less optimal, in an absolute sense, relative to the frontier. We conclude that examination of global production possibilities is a promising approach to understanding production constraints and efficiency opportunities in the global food system.
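The frontier construction rests on allocating a fixed nitrogen total across hyperbolic yield-response curves so that no reallocation can raise total protein yield. The toy sketch below does this for three hypothetical regions with a constrained optimizer; the Ymax and K values and the nitrogen total are illustrative only and are not estimates from the study.

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical hyperbolic yield-response curves Y(N) = Ymax * N / (N + K)
# for three regions (protein yield vs nitrogen input, arbitrary units)
ymax = np.array([200.0, 150.0, 300.0])
k = np.array([80.0, 40.0, 120.0])
n_total = 300.0

total_yield = lambda n: float(np.sum(ymax * n / (n + k)))
objective = lambda n: -total_yield(n)

res = minimize(objective, x0=np.full(3, n_total / 3),
               bounds=[(0.0, n_total)] * 3,
               constraints={"type": "eq", "fun": lambda n: n.sum() - n_total})
print("optimal allocation:", np.round(res.x, 1), "frontier yield:", round(total_yield(res.x), 1))

# compare with a deliberately skewed allocation of the same total nitrogen
skewed = np.array([250.0, 25.0, 25.0])
print("skewed-allocation yield:", round(total_yield(skewed), 1))
```

At the optimum the marginal yields of all regions with nonzero nitrogen are equalized, which is the diminishing-returns intuition behind the frontier.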
Shape optimization techniques for musical instrument design
NASA Astrophysics Data System (ADS)
Henrique, Luis; Antunes, Jose; Carvalho, Joao S.
2002-11-01
The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as searched variables, the system geometry is modeled in terms of truncated series of orthogonal space-functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique reduces considerably the number of searched variables, and has the potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
NASA Astrophysics Data System (ADS)
Hagemann, Stefan; Chen, Cui; Haerter, Jan O.; Gerten, Dieter; Heinke, Jens; Piani, Claudio
2010-05-01
Future climate model scenarios depend crucially on their adequate representation of the hydrological cycle. Within the European project "Water and Global Change" (WATCH), special care is taken to couple state-of-the-art climate model output to a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, due to the systematic model errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, the methodology of a statistical bias correction has been developed, which can be used for correcting climate model output to produce internally consistent fields that have the same statistical intensity distribution as the observations. As observations, global re-analysed daily data of precipitation and temperature are used that are obtained in the WATCH project. We will apply the bias correction to global climate model data of precipitation and temperature from the GCMs ECHAM5/MPIOM, CNRM-CM3 and LMDZ-4, and intercompare the bias corrected data to the original GCM data and the observations. Then, the original and the bias corrected GCM data will be used to force two global hydrology models: (1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM) consisting of the Simplified Land surface (SL) scheme and the Hydrological Discharge (HD) model, and (2) the dynamic vegetation model LPJmL operated by the Potsdam Institute for Climate Impact Research. The impact of the bias correction on the projected simulated hydrological changes will be analysed, and the resulting behaviour of the two hydrology models will be compared.
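A common way to realise such a statistical bias correction is empirical quantile mapping, sketched below on synthetic daily precipitation; the gamma-distributed toy data and the 101-quantile transfer function are assumptions for illustration and do not reproduce the WATCH methodology or its handling of trends.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: transform model values so that their
    distribution over the historical period matches the observations."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    # map each value to its quantile in the model climate, then to the
    # observed value at the same quantile
    return np.interp(model_future, model_q, obs_q)

# toy daily precipitation series (mm/day): the model is too wet and too variable
rng = np.random.default_rng(0)
obs_hist = rng.gamma(shape=0.8, scale=4.0, size=3650)
model_hist = rng.gamma(shape=0.8, scale=6.0, size=3650)
model_scen = rng.gamma(shape=0.8, scale=6.5, size=3650)     # future scenario

corrected = quantile_map(model_hist, obs_hist, model_scen)
print("means  obs %.2f  raw model %.2f  corrected %.2f"
      % (obs_hist.mean(), model_scen.mean(), corrected.mean()))
```

The corrected fields then share the observed intensity distribution over the calibration period, which is the property the hydrological models rely on.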
Rafieerad, A R; Bushroa, A R; Nasiri-Tabrizi, B; Kaboli, S H A; Khanahmadi, S; Amiri, Ahmad; Vadivelu, J; Yusof, F; Basirun, W J; Wasa, K
2017-05-01
Recently, robust optimization and prediction models have attracted considerable attention in the field of surface engineering and coating techniques, as they can reach the highest possible output values with the fewest trial-and-error experiments. Moreover, owing to the need to find optimum values of the dependent variables, multi-objective metaheuristic models have been proposed to optimize various processes. Herein, oriented mixed oxide nanotubular arrays were grown on a Ti-6Al-7Nb (Ti67) implant using physical vapor deposition magnetron sputtering (PVDMS), designed by the Taguchi method, followed by electrochemical anodization. The obtained adhesion strength and hardness of Ti67/Nb were modeled by particle swarm optimization (PSO) to predict the output performance. Based on the developed models, a multi-objective PSO (MOPSO) run was aimed at finding PVDMS inputs that maximize both outputs simultaneously. The resulting sputtering parameters were applied in a validation experiment and produced higher adhesion strength and hardness of the layer interfaced with Ti67. The as-deposited Nb layers before and after optimization were anodized in a fluoride-based electrolyte for 300 min. To crystallize the coatings, the anodically grown mixed oxide TiO2-Nb2O5-Al2O3 nanotubes were annealed at 440°C for 30 min. From the FESEM observations, the optimized adhesive Nb interlayer led to greater homogeneity of the mixed nanotube arrays. As a result of this surface modification, the anodized sample after annealing showed the best mechanical, tribological, corrosion-resistance and in-vitro bioactivity properties, where a thick bone-like apatite layer was formed on the mixed oxide nanotube surface within 10 days of immersion in simulated body fluid (SBF) after applying MOPSO. The novel results of this study can be effective in optimizing a variety of surface properties of nanostructured implants. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
The low-frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of on-orbit low-frequency error analysis and calibration, which includes detection of optical-axis angle variation of the star sensors, relative calibration among star sensors, multi-star sensor information fusion, and low-frequency error model construction and verification. Secondly, we use an optical-axis angle change detection method to analyze the law of low-frequency error variation. Thirdly, we use relative calibration and information fusion among the star sensors to achieve datum unification and high-precision attitude output. Finally, we construct the low-frequency error model and perform optimal estimation of the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite are used. Test results demonstrate that the calibration model in this paper describes the law of the low-frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is clearly improved after the step-wise calibration.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-01
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, so the quadrature correction systems should be arranged independently. The process leading to quadrature error is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method is proved to be the best method for quadrature correction, and it also improves the Angle Random Walk (ARW), reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455
Optimal information transfer in enzymatic networks: A field theoretic formulation
NASA Astrophysics Data System (ADS)
Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.
2017-07-01
Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.
Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.
NASA Astrophysics Data System (ADS)
Gubler, S.; Gruber, S.; Purves, R. S.
2012-06-01
As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and several parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best-performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are instead estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between -2 and 5%, and between 7 and 13%, across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct but overestimating the diffuse SDR. Calibration of LDR parameterizations to local conditions strongly reduces MBD and RMSD compared to using the published parameter values, resulting in relative MBD and RMSD of less than 5% and 10%, respectively, for the best parameterizations. The best estimates of cloud transmissivity during nighttime were obtained by linearly interpolating the average cloud transmissivity of the four hours of the preceding afternoon and the following morning. Model uncertainty can be caused by different errors such as code implementation, errors in input data, errors in estimated parameters, etc. The influence of the latter two (errors in input data and model parameter uncertainty) on model outputs is determined using Monte Carlo simulations. Model uncertainty is given as the relative standard deviation σrel of the simulated frequency distributions of the model outputs. An optimistic estimate of the relative uncertainty σrel resulted in 10% for the clear-sky direct, 30% for diffuse, 3% for global SDR, and 3% for the fitted all-sky LDR.
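As a rough illustration of the Monte Carlo uncertainty estimate described above, the sketch below perturbs hypothetical inputs and parameters of a toy clear-sky SDR function and reports the relative standard deviation of the output; the function form and error magnitudes are assumptions, not those of the evaluated parameterization.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_clear_sky_sdr(s0, precip_water, aerosol):
    # Toy stand-in for a clear-sky global SDR parameterization (W m^-2).
    return s0 * np.exp(-0.1 * precip_water - 0.3 * aerosol)

n = 100_000
s0 = rng.normal(1000.0, 5.0, n)     # assumed input (top-of-atmosphere) error
pw = rng.normal(1.5, 0.3, n)        # assumed precipitable-water uncertainty (cm)
aod = rng.normal(0.1, 0.05, n)      # assumed aerosol-content uncertainty

out = toy_clear_sky_sdr(s0, pw, aod)
sigma_rel = out.std() / out.mean()  # relative standard deviation of the output
print(f"relative uncertainty sigma_rel = {100 * sigma_rel:.1f}%")
```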
DEEM, a versatile platform of FRD measurement for highly multiplexed fibre systems in astronomy
NASA Astrophysics Data System (ADS)
Yan, Yunxiang; Yan, Qi; Wang, Gang; Sun, Weimin; Luo, A.-Li; Ma, Zhenyu; Zhang, Qiong; Li, Jian; Wang, Shuqing
2018-06-01
We present DEEM, the direct energy encircling method, a new way to characterize the performance of fibres in most astronomical spectroscopic applications. It is a versatile platform to measure focal ratio degradation (FRD), throughput, and the point spread function. The principle of DEEM and the relation between the encircled energy and the spot size were derived and simulated based on the power distribution model (PDM). We analysed the errors of DEEM and identified the major error source to support better understanding and optimization. DEEM was validated by comparing its results with a conventional method, which shows that DEEM is robust and highly accurate in both stable and complex experimental environments. Applications on the integral field unit (IFU) show that the FRD of the 50 μm core fibre falls short of the requirement that the output focal ratio be slower than 4.5. The homogeneity of throughput is acceptable and higher than 85 per cent. The prototype IFU of the first generation helped identify imperfections and thus optimize the design of the next generation, based on a staggered structure with 35 μm core fibres of N.A. = 0.12, which can improve the FRD performance. The dependence of FRD on wavelength and core size shows that a higher output focal ratio occurs at shorter wavelengths for large-core fibres, in agreement with the prediction of the PDM, although the dependence in the observed data is weaker than predicted.
Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.
2016-01-01
T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way of finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and to automate the construction of globally optimal designs. The algorithm successfully produces results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by numerical optimization. PMID:27330230
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable energy project. At present, there is a need for reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves fitted by Lagrange's method to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
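The abstract does not give the power curves or the error-propagation formula, so the sketch below is only a hedged illustration: it interpolates a made-up power curve and compares the power computed from measured wind speeds with the power computed after applying a 10% speed error, reporting the resulting relative power error.

```python
import numpy as np

# Made-up power curve of a hypothetical turbine: wind speed (m/s) -> power (kW).
curve_speed = np.array([3, 5, 7, 9, 11, 13, 15, 25], dtype=float)
curve_power = np.array([0, 120, 450, 980, 1600, 2000, 2050, 2050], dtype=float)

def power(ws):
    # Piecewise-linear interpolation stands in for the Lagrange fit in the study.
    return np.interp(ws, curve_speed, curve_power, left=0.0, right=0.0)

rng = np.random.default_rng(7)
wind_speed = rng.weibull(2.0, 10_000) * 8.0      # synthetic measured wind series
p_true = power(wind_speed).sum()
p_err = power(wind_speed * 1.10).sum()           # +10% speed measurement error

print(f"relative power error: {100 * (p_err - p_true) / p_true:.1f}%")
```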
Measures of rowing performance.
Smith, T Brett; Hopkins, Will G
2012-04-01
Accurate measures of performance are important for assessing competitive athletes in practical and research settings. We present here a review of rowing performance measures, focusing on the errors in these measures and the implications for testing rowers. The yardstick for assessing error in a performance measure is the random variation (typical or standard error of measurement) in an elite athlete's competitive performance from race to race: ∼1.0% for time in 2000 m rowing events. There has been little research interest in on-water time trials for assessing rowing performance, owing to logistic difficulties and environmental perturbations in performance time with such tests. Mobile ergometry via instrumented oars or rowlocks should reduce these problems, but the associated errors have not yet been reported. Measurement of boat speed to monitor on-water training performance is common; one device based on global positioning system (GPS) technology contributes negligible extra random error (0.2%) in speed measured over 2000 m, but extra error is substantial (1-10%) with other GPS devices or with an impeller, especially over shorter distances. The problems with on-water testing have led to widespread use of the Concept II rowing ergometer. The standard error of the estimate of on-water 2000 m time predicted by 2000 m ergometer performance was 2.6% and 7.2% in two studies, reflecting different effects of skill, body mass and environment in on-water versus ergometer performance. However, well-trained rowers have a typical error in performance time of only ∼0.5% between repeated 2000 m time trials on this ergometer, so such trials are suitable for tracking changes in physiological performance and factors affecting it. Many researchers have used the 2000 m ergometer performance time as a criterion to identify other predictors of rowing performance. Standard errors of the estimate vary widely between studies even for the same predictor, but the lowest errors (~1-2%) have been observed for peak power output in an incremental test, some measures of lactate threshold and measures of 30-second all-out power. Some of these measures also have typical error between repeated tests suitably low for tracking changes. Combining measures via multiple linear regression needs further investigation. In summary, measurement of boat speed, especially with a good GPS device, has adequate precision for monitoring training performance, but adjustment for environmental effects needs to be investigated. Time trials on the Concept II ergometer provide accurate estimates of a rower's physiological ability to output power, and some submaximal and brief maximal ergometer performance measures can be used frequently to monitor changes in this ability. On-water performance measured via instrumented skiffs that determine individual power output may eventually surpass measures derived from the Concept II.
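As a small illustration of the "typical error" statistic referred to above, the sketch below estimates the within-athlete coefficient of variation from repeated 2000 m trial times; the times are invented and the log-transform approach is one common convention, not necessarily the one used in the cited studies.

```python
import numpy as np

# Hypothetical repeated 2000 m ergometer times (seconds) for several rowers.
trials = np.array([
    [372.1, 371.0, 373.5],
    [391.4, 389.8, 390.6],
    [405.2, 407.0, 404.1],
])

log_trials = np.log(trials)
# Within-athlete standard deviation of log-times, pooled across athletes,
# converted to a percentage coefficient of variation (the typical error).
within_sd = np.sqrt(np.mean(log_trials.var(axis=1, ddof=1)))
typical_error_pct = 100 * (np.exp(within_sd) - 1)
print(f"typical error ~ {typical_error_pct:.2f}%")
```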
Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M
2011-01-01
Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision, and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig was assembled through which water circulated at different constant rates, with ports to insert catheters into a flow chamber. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. The coefficient of variation (CV) was calculated for each set and averaged. Between-catheter-system (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. The performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) was 5.4% for Arrow and 4.8% for Edwards. The random precision error was ± 10.0% (95% confidence limits). The CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single readings and ± 13.0% for triplicate readings, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
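The random and systematic components quoted above combine, presumably in quadrature, to the stated total precision error; the short check below reproduces that arithmetic and is only a sketch of the error-combination step, with the triplicate case assuming the random component shrinks with the square root of three.

```python
import math

random_error = 10.0      # % (95% confidence limits, between-readings component)
systematic_error = 11.6  # % (between-catheter-system component)

single = math.hypot(random_error, systematic_error)                 # quadrature sum
triplicate = math.hypot(random_error / math.sqrt(3), systematic_error)
print(f"single reading:  ±{single:.1f}%")      # ~15.3%
print(f"triplicate mean: ±{triplicate:.1f}%")  # ~13.0%
```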
SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER
NASA Technical Reports Server (NTRS)
Scotti, S. J.
1994-01-01
SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplatt's ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of the over 100 ADS optimization choices such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function and variable metric methods. Default choices of the many control parameters of ADS are made for the user, however, the user can override any of the ADS control parameters desired for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by a LALR(1) grammar and the SOL compiler's parser was generated automatically from the LALR(1) grammar with a parser-generator. Hence unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis insures that the SOL compiler recognizes all legal SOL programs, can recover from and correct for many errors and report the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. 
Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source, in terms of its source characteristics, is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when its characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at an observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model are the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one, and hence the complexity of the optimization model is reduced. The results show that the proposed linked ANN-Optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that the mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
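The abstract describes the linkage but not the specific network architecture or optimizer, so the following is only a schematic sketch: a small neural network (here scikit-learn's MLPRegressor as a stand-in for the Levenberg-Marquardt-trained ANN) learns lag time from candidate source parameters, and its prediction is used inside a least-squares objective over observed concentrations; the transport model, data, and settings are placeholders, not the study's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def true_lag_time(location, release_period):
    # Placeholder for the unknown lag-time relation that the ANN must learn.
    return 0.8 * location + 0.2 * release_period

def breakthrough(strength, release_period, lag, t):
    # Toy stand-in for the analytical groundwater transport solution.
    tau = np.maximum(t - lag, 0.0)
    return strength * tau * np.exp(-tau / release_period)

# 1) Train a small ANN: (source location, release period) -> lag time.
X = rng.uniform([1.0, 1.0], [10.0, 5.0], size=(300, 2))
y = true_lag_time(X[:, 0], X[:, 1])
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, y)

# 2) Synthetic "observed" concentrations from an assumed true source.
t_obs = np.linspace(0.0, 30.0, 40)
c_obs = breakthrough(2.5, 3.0, true_lag_time(4.0, 3.0), t_obs)

# 3) Linked optimization: the ANN supplies the lag time for each candidate source.
def objective(params):
    loc, strength, rp = params
    lag = ann.predict(np.array([[loc, rp]]))[0]
    c_sim = breakthrough(strength, rp, lag, t_obs)
    return np.sum((c_obs - c_sim) ** 2)

result = differential_evolution(objective, [(1, 10), (0.1, 5), (1, 5)], seed=0)
print("identified (location, strength, release period):", result.x)
```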
NASA Technical Reports Server (NTRS)
Tsaoussi, Lucia S.; Koblinsky, Chester J.
1994-01-01
In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction being derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
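The abstract refers to a formal error covariance propagation scheme without spelling it out; a common linear form is sigma^2 = J C J^T, where J is the sensitivity of the output to the unadjusted parameters and C their covariance. The toy numbers below are assumptions, not TOPEX/POSEIDON values.

```python
import numpy as np

# Jacobian of a topography estimate with respect to three unadjusted error
# sources (e.g. orbit, geoid, tides) at two hypothetical locations.
J = np.array([[1.0, 0.8, 0.3],
              [1.0, 0.5, 0.6]])

# Assumed covariance (cm^2) of the unadjusted parameter errors.
C = np.diag([2.0**2, 10.0**2, 3.0**2])

total_cov = J @ C @ J.T             # propagated covariance of the outputs
rms = np.sqrt(np.diag(total_cov))   # total error (cm) at each location
print("propagated RMS errors (cm):", rms)
```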
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
Optimum Design of Aerospace Structural Components Using Neural Networks
NASA Technical Reports Server (NTRS)
Berke, L.; Patnaik, S. N.; Murthy, P. L. N.
1993-01-01
The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
Pranata, Adrian; Perraton, Luke; El-Ansary, Doa; Clark, Ross; Fortin, Karine; Dettmann, Tim; Brandham, Robert; Bryant, Adam
2017-07-01
The ability to control lumbar extensor force output is necessary for daily activities. However, it is unknown whether this ability is impaired in chronic low back pain patients. Similarly, it is unknown whether lumbar extensor force control is related to the disability levels of chronic low back pain patients. Thirty-three people with chronic low back pain and 20 healthy people performed a lumbar extension force-matching task in which they increased and decreased their force output to match a variable target force within 20%-50% of maximal voluntary isometric contraction. Force control was quantified as the root-mean-square error between participants' force output and the target force across the entire force curve and during its increasing and decreasing portions. Within- and between-group differences in force-matching error, and the relationship between the back pain group's force-matching results and their Oswestry Disability Index scores, were assessed using ANCOVA and linear regression, respectively. The back pain group demonstrated more overall force-matching error (mean difference = 1.60 [0.78, 2.43], P < 0.01) and more force-matching error while increasing force output (mean difference = 2.19 [1.01, 3.37], P < 0.01) than the control group. The back pain group demonstrated more force-matching error while increasing than while decreasing force output (mean difference = 1.74, P < 0.001, 95% CI [0.87, 2.61]). A unit increase in force-matching error while decreasing force output is associated with a 47% increase in Oswestry score in the back pain group (R2 = 0.19, P = 0.006). Lumbar extensor muscle force control is compromised in chronic low back pain patients. Force-matching error predicts disability, confirming the validity of our force control protocol for chronic low back pain patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
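As a minimal illustration of the force-matching error metric described above, the sketch below computes the RMSE between a simulated force trace and its target over the whole trial and separately over the rising and falling portions; the signals are synthetic.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1000)
target = 35 + 15 * np.sin(2 * np.pi * t / 10)   # % MVIC, rises then falls (20-50%)
force = target + np.random.default_rng(3).normal(0, 2.0, t.size)  # participant output

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

rising = np.gradient(target) > 0                # increasing portion of the target
print("overall RMSE:   ", rmse(force, target))
print("increasing RMSE:", rmse(force[rising], target[rising]))
print("decreasing RMSE:", rmse(force[~rising], target[~rising]))
```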
Pasler, Marlies; Michel, Kilian; Marrazzo, Livia; Obenland, Michael; Pallotta, Stefania; Björnsgard, Mari; Lutterbach, Johannes
2017-09-01
The purpose of this study was to characterize a new single large-area ionization chamber, the integral quality monitor system (iRT, Germany), for online and real-time beam monitoring. Signal stability, monitor unit (MU) linearity and dose rate dependence were investigated for static and arc deliveries and compared to independent ionization chamber measurements. The dose verification capability of the transmission detector system was evaluated by comparing calculated and measured detector signals for 15 volumetric modulated arc therapy plans. The error detection sensitivity was tested by introducing MLC position and linac output errors. Deviations in dose distributions between the original and error-induced plans were compared in terms of detector signal deviation, dose-volume histogram (DVH) metrics and 2D γ-evaluation (2%/2 mm and 3%/3 mm). The detector signal is linearly dependent on linac output and shows negligible (<0.4%) dose rate dependence up to 460 MU min-1. Signal stability is within 1% for cumulative detector output; substantial variations were observed for the segment-by-segment signal. Calculated versus measured cumulative signal deviations ranged from -0.16% to 2.25%. DVH, mean 2D γ-value and detector signal evaluations showed increasing deviations with regard to the respective reference with growing MLC and dose output errors; good correlation between DVH metrics and detector signal deviation was found (e.g. PTV Dmean: R2 = 0.97). Positional MLC errors of 1 mm and errors in linac output of 2% were identified with the transmission detector system. The extensive tests performed in this investigation show that the new transmission detector provides a stable and sensitive cumulative signal output and is suitable for beam monitoring during patient treatment.
Fuzzy logic control and optimization system
Lou, Xinsheng [West Hartford, CT
2012-04-17
A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1992-01-01
A neural network is trained to output a time-dependent target vector defined over a predetermined time interval in response to a time-dependent input vector defined over the same time interval, by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
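The patent abstract describes corrective feedback of a scaled error into the output neurons, with the feedback gain reduced as learning proceeds and a gradient update at the end of the interval. The sketch below is only a toy, one-unit illustration of that idea: it applies per-step (rather than end-of-interval) updates and a simplified delta rule, so it is not the patented procedure itself; all signals and constants are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
x = np.sin(2 * np.pi * t)                # time-dependent input vector
d = 0.6 * np.sin(2 * np.pi * t + 0.8)    # time-dependent target vector

w_x, w_r = rng.normal(0, 0.1, 2)         # input and recurrent weights (toy network)
lam, lr = 1.0, 0.2                       # teacher-forcing gain and learning rate

for cycle in range(500):
    y_prev, err2 = 0.0, 0.0
    for k in range(len(t)):
        # Teacher forcing: feed a fraction of the previous-step error back in.
        y_forced = y_prev + lam * ((d[k - 1] if k else 0.0) - y_prev)
        y = np.tanh(w_x * x[k] + w_r * y_forced)
        e = d[k] - y
        err2 += e * e
        # Simplified delta-rule update (recurrent gradient terms neglected).
        w_x += lr * e * (1 - y * y) * x[k]
        w_r += lr * e * (1 - y * y) * y_forced
        y_prev = y
    lam = min(1.0, np.sqrt(err2 / len(t)))   # relax forcing as the error shrinks

print("final RMS tracking error:", np.sqrt(err2 / len(t)))
```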
Beam collimation and focusing and error analysis of LD and fiber coupling system based on ZEMAX
NASA Astrophysics Data System (ADS)
Qiao, Lvlin; Zhou, Dejian; Xiao, Lei
2017-10-01
The laser diode has many advantages, such as high efficiency, small volume, low cost and easy integration, so it is widely used. Because of its poor beam quality, however, the application of the semiconductor laser has been seriously hampered. To address this poor beam quality, the ZEMAX optical design software is used to simulate the far-field characteristics of the semiconductor laser beam, and a coupling module between the semiconductor laser and the optical fiber is designed and optimized. The beam is coupled into an optical fiber with core diameter d = 200 µm and numerical aperture NA = 0.22, and the output power (coupling efficiency) can reach 95%. Finally, the influence of three alignment errors on the coupling efficiency during installation is analyzed.
Fast converging minimum probability of error neural network receivers for DS-CDMA communications.
Matyjas, John D; Psaromiligkos, Ioannis N; Batalama, Stella N; Medley, Michael J
2004-03-01
We consider a multilayer perceptron neural network (NN) receiver architecture for the recovery of the information bits of a direct-sequence code-division-multiple-access (DS-CDMA) user. We develop a fast converging adaptive training algorithm that minimizes the bit-error rate (BER) at the output of the receiver. The adaptive algorithm has three key features: i) it incorporates the BER, i.e., the ultimate performance evaluation measure, directly into the learning process, ii) it utilizes constraints that are derived from the properties of the optimum single-user decision boundary for additive white Gaussian noise (AWGN) multiple-access channels, and iii) it embeds importance sampling (IS) principles directly into the receiver optimization process. Simulation studies illustrate the BER performance of the proposed scheme.
Zou, An-Min; Dev Kumar, Krishna; Hou, Zeng-Guang
2010-09-01
This paper investigates the problem of output feedback attitude control of an uncertain spacecraft. Two robust adaptive output feedback controllers based on Chebyshev neural networks (CNN), termed adaptive neural network (NN) controller-I and adaptive NN controller-II, are proposed for the attitude tracking control of spacecraft. The four-parameter representation (quaternion) is employed to describe the spacecraft attitude for global representation without singularities. A nonlinear reduced-order observer is used to estimate the derivative of the spacecraft output, and the CNN is introduced to further improve the control performance by approximating the spacecraft attitude motion. The implementation of the basis functions of the CNN used in the proposed controllers depends only on the desired signals, and a smooth robust compensator using the hyperbolic tangent function is employed to counteract the CNN approximation errors and external disturbances. The adaptive NN controller-II can efficiently avoid the over-estimation problem (i.e., the bound of the CNN's output being much larger than that of the approximated unknown function, so that the control input may be very large) existing in the adaptive NN controller-I. Both adaptive output feedback controllers using the CNN can guarantee that all signals in the resulting closed-loop system are uniformly ultimately bounded. For performance comparison, the standard adaptive controller using the linear parameterization of the spacecraft attitude motion is also developed. Simulation studies are presented to show the advantages of the proposed CNN-based output feedback approach over the standard adaptive output feedback approach.
Zhao, Guo; Wang, Hui; Liu, Gang; Wang, Zhiqiang
2016-09-21
An easy, but effective, method has been proposed to detect and quantify Pb(II) in the presence of Cd(II) based on a Bi/glassy carbon electrode (Bi/GCE) combining a back propagation artificial neural network (BP-ANN) and square wave anodic stripping voltammetry (SWASV), without further electrode modification. The effects of Cd(II) at different concentrations on the stripping responses of Pb(II) were studied. The results indicate that the presence of Cd(II) reduces the prediction precision of a direct calibration model. Therefore, a two-input, one-output BP-ANN was built for the optimization of the stripping voltammetric sensor, which considers the combined effects of Cd(II) and Pb(II) on the SWASV detection of Pb(II) and establishes the nonlinear relationship between the stripping peak currents of Pb(II) and Cd(II) and the concentration of Pb(II). The key parameters of the BP-ANN and the factors affecting the SWASV detection of Pb(II) were optimized. The prediction performance of the direct calibration model and the BP-ANN model were tested with regard to the mean absolute error (MAE), root mean square error (RMSE), average relative error (ARE), and correlation coefficient. The results proved that the BP-ANN model exhibited higher prediction accuracy than the direct calibration model. Finally, an analysis of real samples was performed to determine trace Pb(II) in soil specimens with satisfactory results.
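The abstract does not give the network size beyond "two inputs and one output", so the following is a hedged sketch using scikit-learn's MLPRegressor as a stand-in for the BP-ANN: it maps the Pb(II) and Cd(II) stripping peak currents to the Pb(II) concentration and reports MAE and RMSE; the calibration data and interference model are synthetic assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(1)

# Synthetic calibration set: peak currents for Pb and Cd vs. Pb concentration.
c_pb = rng.uniform(5, 100, 200)
c_cd = rng.uniform(0, 100, 200)
i_pb = 0.8 * c_pb - 0.002 * c_pb * c_cd + rng.normal(0, 0.5, 200)  # assumed interference
i_cd = 0.6 * c_cd + rng.normal(0, 0.5, 200)

X = np.column_stack([i_pb, i_cd])        # two inputs: stripping peak currents
y = c_pb                                 # one output: Pb(II) concentration

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])

print("MAE :", mean_absolute_error(y[150:], pred))
print("RMSE:", np.sqrt(mean_squared_error(y[150:], pred)))
```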
Five-wave-packet quantum error correction based on continuous-variable cluster entanglement
Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi
2015-01-01
Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used as five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, so any error appearing in the remaining two channels never affects the output state, i.e. the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395
Optimization and performance of bifacial solar modules: A global perspective
Sun, Xingshu; Khan, Mohammad Ryyan; Deline, Chris; ...
2018-02-06
With the rapidly growing interest in bifacial photovoltaics (PV), a worldwide map of their potential performance can help assess and accelerate the global deployment of this emerging technology. However, the existing literature only highlights optimized bifacial PV for a few geographic locations or develops worldwide performance maps for very specific configurations, such as the vertical installation. It is still difficult to translate these location- and configuration-specific conclusions to a general optimized performance of this technology. In this paper, we present a global study and optimization of bifacial solar modules using a rigorous and comprehensive modeling framework. Our results demonstrate that with a low albedo of 0.25, the bifacial gain of ground-mounted bifacial modules is less than 10% worldwide. However, increasing the albedo to 0.5 and elevating modules 1 m above the ground can boost the bifacial gain to 30%. Moreover, we derive a set of empirical design rules, which optimize bifacial solar modules across the world and provide the groundwork for rapid assessment of the location-specific performance. We find that ground-mounted, vertical, east-west-facing bifacial modules will outperform their south-north-facing, optimally tilted counterparts by up to 15% below the latitude of 30 degrees, for an albedo of 0.5. The relative energy output is reversed in latitudes above 30 degrees. A detailed and systematic comparison with data from Asia, Africa, Europe, and North America validates the model presented in this paper.
A Hybrid Multiuser Detector Based on MMSE and AFSA for TDRS System Forward Link
Yin, Zhendong; Liu, Xiaohui
2014-01-01
This study mainly focuses on multiuser detection in tracking and data relay satellite (TDRS) system forward link. Minimum mean square error (MMSE) is a low complexity multiuser detection method, but MMSE detector cannot achieve satisfactory bit error ratio and near-far resistance, whereas artificial fish swarm algorithm (AFSA) is expert in optimization and it can realize the global convergence efficiently. Therefore, a hybrid multiuser detector based on MMSE and AFSA (MMSE-AFSA) is proposed in this paper. The result of MMSE and its modified formations are used as the initial values of artificial fishes to accelerate the speed of global convergence and reduce the iteration times for AFSA. The simulation results show that the bit error ratio and near-far resistance performances of the proposed detector are much better, compared with MF, DEC, and MMSE, and are quite close to OMD. Furthermore, the proposed MMSE-AFSA detector also has a large system capacity. PMID:24883418
Keshavarz, M; Mojra, A
2015-05-01
Geometrical features of a cancerous tumor embedded in biological soft tissue, including tumor size and depth, are necessary for the follow-up procedure and for making suitable therapeutic decisions. In this paper, a new socio-politically motivated global search strategy, the imperialist competitive algorithm (ICA), is implemented to train a feed-forward neural network (FFNN) to estimate the tumor's geometrical characteristics (FFNNICA). First, a viscoelastic model of liver tissue is constructed using a series of in vitro uniaxial and relaxation test data. Then, 163 samples of the tissue, each including a tumor with a different depth and diameter, are generated by using PYTHON programming to link ABAQUS and MATLAB. Next, the samples are divided into 123 samples as the training dataset and 40 samples as the testing dataset. The training inputs of the network are mechanical parameters extracted from palpation of the tissue through a developing noninvasive technology called artificial tactile sensing (ATS). Last, to evaluate the FFNNICA performance, the outputs of the network, including tumor depth and diameter, are compared with the desired values for both training and testing datasets. Deviations of the outputs from the desired values are calculated by a regression analysis. Statistical analysis is also performed by measuring the Root Mean Square Error (RMSE) and Efficiency (E). RMSE in diameter and depth estimations are 0.50 mm and 1.49, respectively, for the testing dataset. The results affirm that the proposed optimization algorithm for training the neural network can be useful for characterizing soft tissue tumors accurately by employing an artificial palpation approach. Copyright © 2015 John Wiley & Sons, Ltd.
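Neither the network architecture nor the ICA settings are given in the abstract, so the following sketch uses a small 4-8-2 feed-forward network and a heavily simplified population-based search as a stand-in for the imperialist competitive algorithm; the tactile-feature data are synthetic and the whole example is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: tactile features -> (tumor depth, diameter).
X = rng.uniform(-1, 1, (123, 4))
Y = np.column_stack([1 + 2 * X[:, 0] + X[:, 1] ** 2, 2 + X[:, 2] - 0.5 * X[:, 3]])

def unpack(w):                        # 4-8-2 feed-forward network weights
    W1 = w[:32].reshape(4, 8); b1 = w[32:40]
    W2 = w[40:56].reshape(8, 2); b2 = w[56:58]
    return W1, b1, W2, b2

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def rmse(w):
    return np.sqrt(np.mean((predict(w, X) - Y) ** 2))

# Simplified population-based search standing in for the imperialist competitive
# algorithm: keep the best "imperialists" and move the rest of the population
# toward them with some random perturbation.
pop = rng.normal(0, 0.5, (60, 58))
for _ in range(300):
    costs = np.array([rmse(w) for w in pop])
    order = np.argsort(costs)
    elites = pop[order[:6]]
    for i in range(len(pop)):
        target = elites[rng.integers(6)]
        pop[i] += 0.5 * (target - pop[i]) + rng.normal(0, 0.05, 58)
    pop[order[:6]] = elites           # keep the elites unchanged

best = pop[np.argmin([rmse(w) for w in pop])]
print("training RMSE:", rmse(best))
```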
Regional Climate Models Downscaling in the Alpine Area with Multimodel SuperEnsemble
NASA Astrophysics Data System (ADS)
Cane, D.; Barbarino, S.; Renier, L.; Ronchi, C.
2012-04-01
The climatic scenarios show a strong warming signal in the Alpine area already by the mid-21st century. The climate simulations, however, even when obtained with Regional Climate Models (RCMs), are affected by strong errors when compared with observations in the control period, owing to their difficulties in representing the complex orography of the Alps and to limitations in their physical parametrizations. In this work we use a selection of RCM runs from the ENSEMBLES project, carefully chosen in order to maximise the variety of leading Global Climate Models and of the RCMs themselves, calculated on the SRES scenario A1B. The reference observations for the Greater Alpine Area are extracted from the European dataset E-OBS produced by the ENSEMBLES project, with an available resolution of 25 km. For the study area of Piemonte, daily temperature and precipitation observations (1957-present) were carefully gridded on a 14-km grid over the Piemonte Region with an Optimal Interpolation technique. We applied the Multimodel SuperEnsemble technique to the temperature fields, reducing the high biases of the RCM temperature fields compared to observations in the control period. We also propose the first application to RCMs of a new probabilistic Multimodel SuperEnsemble Dressing technique to estimate precipitation fields, already applied successfully to weather forecast models, with a careful description of precipitation Probability Density Functions conditioned on the model outputs. This technique reduces the strong precipitation overestimation by RCMs over the Alpine chain and reproduces the monthly behaviour of observed precipitation in the control period far better than the direct model outputs.
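For readers unfamiliar with the technique, a minimal sketch of the deterministic Multimodel SuperEnsemble combination is given below: weights are fitted by least squares to model anomalies over a training (control) period and then applied to new model output. The array shapes, variable names and synthetic data are assumptions for illustration only; the probabilistic Dressing variant used for precipitation is not shown.

```python
import numpy as np

def superensemble_fit(train_models, train_obs):
    """Fit SuperEnsemble weights over a training period.
    train_models: array (n_times, n_models); train_obs: array (n_times,)."""
    model_clim = train_models.mean(axis=0)
    obs_clim = train_obs.mean()
    anomalies = train_models - model_clim
    weights, *_ = np.linalg.lstsq(anomalies, train_obs - obs_clim, rcond=None)
    return weights, model_clim, obs_clim

def superensemble_predict(models, weights, model_clim, obs_clim):
    """S(t) = obs_clim + sum_i a_i * (F_i(t) - model_clim_i)."""
    return obs_clim + (models - model_clim) @ weights

# toy example with three "models" carrying different biases and synthetic observations
rng = np.random.default_rng(1)
obs = rng.normal(15.0, 3.0, size=120)                              # e.g. monthly temperatures
models = obs[:, None] + rng.normal([1.5, -2.0, 0.5], 1.0, size=(120, 3))
w, m_clim, o_clim = superensemble_fit(models[:100], obs[:100])     # control period
corrected = superensemble_predict(models[100:], w, m_clim, o_clim) # scenario period
```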
Comparison of global optimization approaches for robust calibration of hydrologic model parameters
NASA Astrophysics Data System (ADS)
Jung, I. W.
2015-12-01
Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of the future performance of watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, the objective function, the hydrologic model structure and the optimization method. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performance according to different calibration periods, from one year to seven years, was hard to generalize because the four hydrologic models have different levels of complexity and different years have different information content of hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
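The objective functions mentioned above are standard and straightforward to reproduce; the following sketch computes the index of agreement, normalized RMSE, Nash-Sutcliffe efficiency, correlation coefficient and percent bias for a pair of simulated and observed series. Normalizing the RMSE by the observed range is one common convention and is an assumption here.

```python
import numpy as np

def objective_functions(sim, obs):
    """Common hydrologic calibration metrics for simulated vs. observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    resid = sim - obs
    nse = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)           # Nash-Sutcliffe efficiency
    nrmse = np.sqrt(np.mean(resid**2)) / (obs.max() - obs.min())           # RMSE normalized by range
    d = 1.0 - np.sum(resid**2) / np.sum(                                    # Willmott index of agreement
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean()))**2)
    r = np.corrcoef(sim, obs)[0, 1]                                         # correlation coefficient
    pbias = 100.0 * resid.sum() / obs.sum()                                 # percent bias
    return {"NSE": nse, "nRMSE": nrmse, "index_of_agreement": d, "r": r, "PBIAS": pbias}

# toy usage with a synthetic hydrograph
obs = np.array([10.0, 14.0, 30.0, 22.0, 15.0, 12.0])
sim = np.array([11.0, 13.5, 27.0, 24.0, 16.0, 11.0])
metrics = objective_functions(sim, obs)
```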
Additive Runge-Kutta Schemes for Convection-Diffusion-Reaction Equations
NASA Technical Reports Server (NTRS)
Kennedy, Christopher A.; Carpenter, Mark H.
2001-01-01
Additive Runge-Kutta (ARK) methods are investigated for application to the spatially discretized one-dimensional convection-diffusion-reaction (CDR) equations. First, accuracy, stability, conservation, and dense output are considered for the general case when N different Runge-Kutta methods are grouped into a single composite method. Then, implicit-explicit, N = 2, additive Runge-Kutta ARK2 methods from third- to fifth-order are presented that allow for integration of stiff terms by an L-stable, stiffly-accurate explicit, singly diagonally implicit Runge-Kutta (ESDIRK) method while the nonstiff terms are integrated with a traditional explicit Runge-Kutta method (ERK). Coupling error terms are of equal order to those of the elemental methods. Derived ARK2 methods have vanishing stability functions for very large values of the stiff scaled eigenvalue, z^[I] → ∞, and retain high stability efficiency in the absence of stiffness, z^[I] → 0. Extrapolation-type stage-value predictors are provided based on dense-output formulae. Optimized methods minimize both leading order ARK2 error terms and Butcher coefficient magnitudes as well as maximize conservation properties. Numerical tests of the new schemes on a CDR problem show negligible stiffness leakage and near classical order convergence rates. However, tests on three simple singular-perturbation problems reveal generally predictable order reduction. Error control is best managed with a PID-controller. While results for the fifth-order method are disappointing, both the new third- and fourth-order methods are at least as efficient as existing ARK2 methods while offering error control and stage-value predictors.
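The additive implicit-explicit splitting idea can be illustrated with a much simpler scheme than the third- to fifth-order ARK2 methods of the paper: a first-order IMEX Euler step that treats a stiff linear term implicitly and the nonstiff term explicitly. The scalar model problem below is an assumption chosen only to make the splitting concrete.

```python
import numpy as np

def imex_euler_step(u, dt, f_explicit, lam):
    """One first-order IMEX Euler step for u' = f_explicit(u) + lam*u.
    The nonstiff term is advanced explicitly and the stiff linear term implicitly:
        u_new = u + dt*f_explicit(u) + dt*lam*u_new,
    which for a linear stiff term can be solved in closed form."""
    return (u + dt * f_explicit(u)) / (1.0 - dt * lam)

# illustrative stiff scalar test: u' = sin(u) + lam*u with lam << 0
lam = -1.0e4
u, dt = 1.0, 1.0e-2     # dt*|lam| >> 1, far outside an explicit method's stability limit
for _ in range(100):
    u = imex_euler_step(u, dt, np.sin, lam)
```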
An extensometer for global measurement of bone strain suitable for use in vivo in humans
NASA Technical Reports Server (NTRS)
Perusek, G. P.; Davis, B. L.; Sferra, J. J.; Courtney, A. C.; D'Andrea, S. E.
2001-01-01
An axial extensometer able to measure global bone strain magnitudes and rates encountered during physiological activity, and suitable for use in vivo in human subjects, is described. The extensometer uses paired capacitive sensors mounted to intraosseus pins and allows measurement of strain due to bending in the plane of the extensometer as well as uniaxial compression or tension. Data are presented for validation of the device against a surface-mounted strain gage in an acrylic specimen under dynamic four-point bending, with square wave and sinusoidal loading inputs up to 1500 mu epsilon and 20 Hz, representative of physiological strain magnitudes and frequencies. Pearson's correlation coefficient (r) between extensometer and strain gage ranged from 0.960 to 0.999. Mean differences between extensometer and strain gage ranged up to 15.3 mu epsilon. Errors in the extensometer output were directly proportional to the degree of bending that occurred in the specimen; however, these errors were predictable and less than 1 mu epsilon for the loading regime studied. The device is capable of tracking strain rates in excess of 90,000 mu epsilon/s.
Monthly mean simulation experiments with a coarse-mesh global atmospheric model
NASA Technical Reports Server (NTRS)
Spar, J.; Klugman, R.; Lutz, R. J.; Notario, J. J.
1978-01-01
Substitution of observed monthly mean sea-surface temperatures (SSTs) as lower boundary conditions, in place of climatological SSTs, failed to improve the model simulations. While the impact of SST anomalies on the model output is greater at sea level than at upper levels, the impact on the monthly mean simulations is not beneficial at any level. Shifts of one and two days in initialization time produced small, but non-trivial, changes in the model-generated monthly mean synoptic fields. No improvements in the mean simulations resulted from the use of either time-averaged initial data or re-initialization with time-averaged early model output. The noise level of the model, as determined from a multiple initial state perturbation experiment, was found to be generally low, but with a noisier response to initial state errors in high latitudes than in the tropics.
Decay of motor memories in the absence of error
Vaswani, Pavan A.; Shadmehr, Reza
2013-01-01
When motor commands are accompanied by an unexpected outcome, the resulting error induces changes in subsequent commands. However, when errors are artificially eliminated, changes in motor commands are not sustained, but show decay. Why does the adaptation-induced change in motor output decay in the absence of error? A prominent idea is that decay reflects the stability of the memory. We show results that challenge this idea and instead suggest that motor output decays because the brain actively disengages a component of the memory. Humans adapted their reaching movements to a perturbation and were then introduced to a long period of trials in which errors were absent (error-clamp). We found that, in some subjects, motor output did not decay at the onset of the error-clamp block, but a few trials later. We manipulated the kinematics of movements in the error-clamp block and found that as movements became more similar to subjects’ natural movements in the perturbation block, the lag to decay onset became longer and eventually reached hundreds of trials. Furthermore, when there was decay in the motor output, the endpoint of decay was not zero, but a fraction of the motor memory that was last acquired. Therefore, adaptation to a perturbation installed two distinct kinds of memories: one that was disengaged when the brain detected a change in the task, and one that persisted despite it. Motor memories showed little decay in the absence of error if the brain was prevented from detecting a change in task conditions. PMID:23637163
The free-energy principle: a unified brain theory?
Friston, Karl
2010-02-01
A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories - optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.
NASA Astrophysics Data System (ADS)
Nasr, M.; Anwar, S.; El-Tamimi, A.; Pervaiz, S.
2018-04-01
Titanium and its alloys, e.g. Ti6Al4V, have widespread applications in the aerospace, automotive and medical industries. At the same time, titanium and its alloys are regarded as difficult-to-machine materials due to their high strength and low thermal conductivity. Significant efforts have been devoted to improving the accuracy of machining processes for Ti6Al4V. The current study presents the use of the rotary ultrasonic drilling (RUD) process for machining high quality holes in Ti6Al4V. The study takes into account the effects of the main RUD input parameters, including spindle speed, ultrasonic power, feed rate and tool diameter, on the key output responses related to the accuracy of the drilled holes, namely the cylindricity and overcut errors. Analysis of variance (ANOVA) was employed to study the influence of the input parameters on the cylindricity and overcut errors. Later, regression models were developed to find the optimal set of input parameters that minimizes the cylindricity and overcut errors.
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
NASA Astrophysics Data System (ADS)
Aittokoski, Timo; Miettinen, Kaisa
2008-07-01
Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.
NASA Astrophysics Data System (ADS)
Terando, A. J.; Grade, S.; Bowden, J.; Henareh Khalyani, A.; Wootten, A.; Misra, V.; Collazo, J.; Gould, W. A.; Boyles, R.
2016-12-01
Sub-tropical island nations may be particularly vulnerable to anthropogenic climate change because of predicted changes in the hydrologic cycle that would lead to significant drying in the future. However, decision makers in these regions have seen their adaptation planning efforts frustrated by the lack of island-resolving climate model information. Recently, two investigations have used statistical and dynamical downscaling techniques to develop climate change projections for the U.S. Caribbean region (Puerto Rico and U.S. Virgin Islands). We compare the results from these two studies with respect to three commonly downscaled CMIP5 global climate models (GCMs). The GCMs were dynamically downscaled at a convective-permitting scale using two different regional climate models. The statistical downscaling approach was conducted at locations with long-term climate observations and then further post-processed using climatologically aided interpolation (yielding two sets of projections). Overall, both approaches face unique challenges. The statistical approach suffers from a lack of observations necessary to constrain the model, particularly at the land-ocean boundary and in complex terrain. The dynamically downscaled model output has a systematic dry bias over the island despite ample availability of moisture in the atmospheric column. Notwithstanding these differences, both approaches are consistent in projecting a drier climate that is driven by the strong global-scale anthropogenic forcing.
NASA Astrophysics Data System (ADS)
Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid
2017-03-01
The current paper first presents an empirical correlation, based on experimental results, for estimating the thermal conductivity enhancement of MgO-water nanofluid using a curve fitting method. Then, artificial neural networks (ANNs) with various numbers of neurons were assessed, considering temperature and MgO volume fraction as the input variables and thermal conductivity enhancement as the output variable, in order to select the most appropriate and optimized network. Results indicated that the network with 7 neurons had the minimum error. Eventually, the output of the artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. Comparisons revealed that ANN modeling was more accurate than the curve-fitting method in predicting the thermal conductivity enhancement of the nanofluid.
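A minimal sketch of the network-size selection described above is shown below using scikit-learn. The data are synthetic placeholders, and the reported optimum of 7 neurons comes from the paper's own experiments, not from this toy example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# placeholder inputs: column 0 = temperature (deg C), column 1 = MgO volume fraction
rng = np.random.default_rng(1)
X = rng.uniform([20.0, 0.0], [60.0, 0.01], size=(200, 2))
y = 1.0 + 30.0 * X[:, 1] + 0.002 * X[:, 0]          # placeholder conductivity enhancement

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
errors = {}
for n_neurons in range(2, 13):                       # sweep the hidden-layer size
    net = MLPRegressor(hidden_layer_sizes=(n_neurons,), max_iter=5000, random_state=0)
    net.fit(X_tr, y_tr)
    errors[n_neurons] = mean_squared_error(y_te, net.predict(X_te))
best_size = min(errors, key=errors.get)              # network with minimum test error
```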
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
NASA Astrophysics Data System (ADS)
Knoefel, Patrick; Loew, Fabian; Conrad, Christopher
2015-04-01
Crop maps based on classification of remotely sensed data are receiving increased attention in agricultural management. This calls for more detailed knowledge about the reliability of such spatial information. However, classification of agricultural land use is often limited by high spectral similarities of the studied crop types. Moreover, spatially and temporally varying agro-ecological conditions can introduce confusion in crop mapping. Classification errors in crop maps may in turn influence model outputs, such as agricultural production monitoring. One major goal of the PhenoS project ("Phenological structuring to determine optimal acquisition dates for Sentinel-2 data for field crop classification") is the detection of optimal phenological time windows for land cover classification purposes. Since many crop species are spectrally highly similar, accurate classification requires the right selection of satellite images for a certain classification task. In the course of one growing season, phenological phases exist in which crops are separable with higher accuracy. For this purpose, coupling multi-temporal spectral characteristics with phenological events is promising. The focus of this study is the separation of spectrally similar cereal crops such as winter wheat, barley, and rye at two test sites in Germany, "Harz/Central German Lowland" and "Demmin". The study uses object-based random forest (RF) classification to investigate the impact of image acquisition frequency and timing on crop classification uncertainty by permuting all possible combinations of the available RapidEye time series recorded at the test sites between 2010 and 2014. The permutations were applied to different segmentation parameters. Classification uncertainty was then assessed and analysed on the basis of the probabilistic soft output from the RF algorithm on a per-field basis. From this soft output, entropy was calculated as a spatial measure of classification uncertainty. The results indicate that uncertainty estimates provide a valuable addition to traditional accuracy assessments and help the user to allocate error in crop maps.
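A small sketch of the per-field uncertainty measure is given below: Shannon entropy computed from the class-membership probabilities of a random forest. The features and labels are random placeholders standing in for the RapidEye-derived object features and crop classes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classification_entropy(probabilities, eps=1e-12):
    """Shannon entropy of the class-membership probabilities, one value per sample."""
    p = np.clip(probabilities, eps, 1.0)
    return -np.sum(p * np.log(p), axis=1)

# placeholder per-field features (e.g. multi-temporal spectral statistics) and crop labels
rng = np.random.default_rng(0)
X_fields = rng.normal(size=(300, 12))
y = rng.integers(0, 3, size=300)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_fields, y)
uncertainty = classification_entropy(rf.predict_proba(X_fields))   # high entropy = uncertain field
```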
NASA Astrophysics Data System (ADS)
Pichardo, Samuel; Moreno-Hernández, Carlos; Drainville, Robert Andrew; Sin, Vivian; Curiel, Laura; Hynynen, Kullervo
2017-09-01
A better understanding of ultrasound transmission through the human skull is fundamental to developing optimal imaging and therapeutic applications. In this study, we present global attenuation values and functions that correlate apparent density calculated from computed tomography scans to the shear speed of sound. For this purpose, we used a model for sound propagation based on the viscoelastic wave equation (VWE) assuming isotropic conditions. The model was validated using a series of measurements with plates of different plastic materials and angles of incidence of 0°, 15° and 50°. The optimal functions for transcranial ultrasound propagation were established using the VWE, scan measurements of transcranial propagation with an angle of incidence of 40° and a genetic optimization algorithm. Ten (10) locations over three (3) skulls were used for ultrasound frequencies of 270 kHz and 836 kHz. Results with plastic materials demonstrated that the viscoelastic modeling predicted both longitudinal and shear propagation with an average (±s.d.) error of 9(±7)% of the wavelength in the predicted delay and an error of 6.7(±5)% in the estimation of transmitted power. Using the new optimal functions of speed of sound and global attenuation for the human skull, the proposed model predicted the transcranial ultrasound transmission for a frequency of 270 kHz with an expected error in the predicted delay of 5(±2.7)% of the wavelength. The model accurately predicted the sound propagation regardless of whether shear or longitudinal transmission dominated. For 836 kHz, the model predicted accurately on average, with an error in the predicted delay of 17(±16)% of the wavelength. Results indicated the importance of the specificity of the information at a voxel level to better understand ultrasound transmission through the skull. These results and the new model will be very valuable tools for the future development of transcranial applications of ultrasound therapy and imaging.
A Comparison of the Forecast Skills among Three Numerical Models
NASA Astrophysics Data System (ADS)
Lu, D.; Reddy, S. R.; White, L. J.
2003-12-01
Three numerical weather forecast models, MM5, COAMPS and WRF, operated through a joint effort of NOAA HU-NCAS and Jackson State University (JSU) during summer 2003, have been chosen to study their forecast skill against observations. The models forecast over the same region with the same initialization, boundary conditions, forecast length and spatial resolution. The AVN global dataset has been ingested as initial conditions. A grid resolution of 27 km is chosen to represent the current mesoscale models. Forecasts with a length of 36 h are performed, with output at 12 h intervals. The key parameters used to evaluate forecast skill include 12 h accumulated precipitation, sea level pressure, wind, surface temperature and dew point. Precipitation is evaluated statistically using conventional skill scores, the Threat Score (TS) and Bias Score (BS), for different threshold values based on 12 h rainfall observations, whereas other statistical measures such as the Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are applied to the other forecast parameters.
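The two precipitation skill scores can be computed from a simple 2 x 2 contingency table, as in the sketch below; threshold exceedance defines an event, and the denominators are assumed nonzero.

```python
import numpy as np

def precipitation_skill(forecast, observed, threshold):
    """Threat score (TS) and bias score (BS) for accumulated precipitation.
    An event is a 12 h accumulation at or above the given threshold."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    ts = hits / (hits + misses + false_alarms)        # also known as the critical success index
    bs = (hits + false_alarms) / (hits + misses)      # >1 over-forecasting, <1 under-forecasting
    return ts, bs

# toy usage for a 10 mm / 12 h threshold
ts, bs = precipitation_skill([12.0, 3.0, 15.0, 0.0], [11.0, 8.0, 2.0, 0.0], threshold=10.0)
```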
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects due to unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all the closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Völker, Martin; Fiederer, Lukas D J; Berberich, Sofie; Hammer, Jiří; Behncke, Joos; Kršek, Pavel; Tomášek, Martin; Marusič, Petr; Reinacher, Peter C; Coenen, Volker A; Helias, Moritz; Schulze-Bonhage, Andreas; Burgard, Wolfram; Ball, Tonio
2018-06-01
Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
GeMS: an advanced software package for designing synthetic genes.
Jayaraj, Sebastian; Reid, Ralph; Santi, Daniel V
2005-01-01
A user-friendly, advanced software package for gene design is described. The software comprises an integrated suite of programs (also provided as stand-alone tools) that automatically performs the following tasks in gene design: restriction site prediction, codon optimization for any expression host, restriction site inclusion and exclusion, separation of long sequences into synthesizable fragments, Tm and stem-loop determinations, optimal oligonucleotide component design and design verification/error-checking. The output is a complete design report and a list of optimized oligonucleotides to be prepared for subsequent gene synthesis. The user interface accommodates both inexperienced and experienced users. For inexperienced users, explanatory notes are provided so that detailed instructions are not necessary; for experienced users, a streamlined interface is provided without such notes. The software has been extensively tested in the design and successful synthesis of over 400 kb of genes, many of which exceeded 5 kb in length.
Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods.
Gonzalez-Navarro, Felix F; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A; Flores-Rios, Brenda L; Ibarra-Esquer, Jorge E
2016-10-26
Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding of their behaviour is still the subject of ongoing research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB as a function of operating variables such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of the GOB response is strongly related to these variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.
Piehowski, Paul D; Petyuk, Vladislav A; Sandoval, John D; Burnum, Kristin E; Kiebel, Gary R; Monroe, Matthew E; Anderson, Gordon A; Camp, David G; Smith, Richard D
2013-03-01
For bottom-up proteomics, there is a wide variety of database-searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid-search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection, referred to as STEPS, utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of the number of true-positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
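The grid-search idea behind STEPS can be sketched as an exhaustive sweep over user-defined cutoff ranges, keeping the combination that maximizes identifications subject to an acceptable false-discovery rate. The callables count_ids and estimate_fdr below are hypothetical stand-ins for the actual filtering and FDR-estimation logic, which depend on the search engine used.

```python
import itertools

def steps_filter_search(peptides, param_grid, count_ids, estimate_fdr, fdr_limit=0.01):
    """Exhaustively test all filter-parameter combinations (grid search) and keep the
    set of cutoffs that maximizes confident identifications at an acceptable FDR.

    peptides     : search-engine output records
    param_grid   : dict mapping parameter name -> list of candidate cutoffs
    count_ids    : function(peptides, params) -> number of passing identifications
    estimate_fdr : function(peptides, params) -> estimated FDR for those cutoffs
    """
    names = sorted(param_grid)
    best_params, best_count = None, -1
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        if estimate_fdr(peptides, params) <= fdr_limit:
            n_ids = count_ids(peptides, params)
            if n_ids > best_count:
                best_params, best_count = params, n_ids
    return best_params, best_count
```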
Relabeling exchange method (REM) for learning in neural networks
NASA Astrophysics Data System (ADS)
Wu, Wen; Mammone, Richard J.
1994-02-01
The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when 'optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
Adaptive torque estimation of robot joint with harmonic drive transmission
NASA Astrophysics Data System (ADS)
Shi, Zhiguo; Li, Yuankai; Liu, Guangjun
2017-11-01
Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.
NASA Astrophysics Data System (ADS)
Gao, H.; Zhang, S.; Nijssen, B.; Zhou, T.; Voisin, N.; Sheffield, J.; Lee, K.; Shukla, S.; Lettenmaier, D. P.
2017-12-01
Despite its errors and uncertainties, the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis real-time product (TMPA-RT) has been widely used for hydrological monitoring and forecasting due to its timely availability for real-time applications. To evaluate the utility of TMPA-RT in hydrologic predictions, many studies have compared modeled streamflows driven by TMPA-RT against gauge data. However, because of the limited availability of streamflow observations in data-sparse regions, there is still a lack of comprehensive comparisons of TMPA-RT based hydrologic predictions at the global scale. Furthermore, its skill is expected to be lower at the subbasin scale than at the basin scale. In this study, we evaluate and characterize the utility of the TMPA-RT product over selected global river basins during the period 1998 to 2015, using the TMPA research product (TMPA-RP) as a reference. The Variable Infiltration Capacity (VIC) model, which was calibrated and validated previously, is adopted to simulate streamflows driven by TMPA-RT and TMPA-RP, respectively. The objective of this study is to analyze the spatial and temporal characteristics of the hydrologic predictions by answering the following questions: (1) How do the precipitation errors associated with the TMPA-RT product transform into streamflow errors with respect to geographical and climatological characteristics? (2) How do streamflow errors vary across scales within a basin?
Online Optimization Method for Operation of Generators in a Micro Grid
NASA Astrophysics Data System (ADS)
Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi
Recently, many studies and developments on distributed generators such as photovoltaic generation systems, wind turbine generation systems and fuel cells have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads and a storage battery, is expected to become one of the new operation systems for distributed generation. However, since precipitous load fluctuations occur in a micro grid because of its smaller capacity compared with the conventional power system, high-accuracy load forecasting and control schemes to balance supply and demand are needed. Namely, it is necessary to improve the precision of operation in the micro grid by observing load fluctuations and correcting the start-stop schedule and output of generators online. However, it is not easy to determine the operation schedule of each generator in a short time, because the problem of determining the start-up, shut-down and output of each generator in a micro grid is a mixed integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after picking up all unit commitment patterns of the generators that satisfy the minimum up time and minimum down time constraints by using the enumeration method, the optimal schedule and output of the generators are determined under the other operational constraints by using PSO. A numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.
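As a rough sketch of the PSO stage only (the enumeration of feasible unit-commitment patterns is omitted), the code below optimizes the continuous outputs of an already-committed set of generators against quadratic cost curves, with a penalty for supply-demand imbalance. All cost curves, limits and coefficients are illustrative assumptions.

```python
import numpy as np

def pso_dispatch(cost_fns, p_min, p_max, demand, n_particles=30, n_iter=200, seed=0):
    """Particle swarm optimization of generator outputs for one commitment pattern.
    cost_fns : list of per-generator cost functions of output power
    p_min/p_max : output limits of the committed generators
    demand   : load to be met; any imbalance is penalized in the objective
    """
    rng = np.random.default_rng(seed)
    dim = len(cost_fns)
    x = rng.uniform(p_min, p_max, size=(n_particles, dim))   # particle positions (outputs)
    v = np.zeros_like(x)                                      # particle velocities
    def total_cost(p):
        return sum(f(pi) for f, pi in zip(cost_fns, p)) + 1e3 * abs(p.sum() - demand)
    pbest = x.copy()
    pbest_val = np.array([total_cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, p_min, p_max)
        vals = np.array([total_cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, total_cost(gbest)

# example: two committed generators with quadratic cost curves and an 8 MW demand
cost = [lambda p: 0.1 * p**2 + 2.0 * p, lambda p: 0.2 * p**2 + 1.0 * p]
best_outputs, best_cost = pso_dispatch(cost, p_min=1.0, p_max=6.0, demand=8.0)
```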
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2016-01-01
This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.
1994-01-01
Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least-squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data as input to the estimation algorithm, effects caused by the various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.
Coordinate alignment of combined measurement systems using a modified common points method
NASA Astrophysics Data System (ADS)
Zhao, G.; Zhang, P.; Xiao, W.
2018-03-01
Coordinate metrology has been extensively researched for its outstanding advantages in measurement range and accuracy. The alignment of different measurement systems is usually achieved by integrating local coordinates via common points before measurement. The alignment errors accumulate and significantly reduce the global accuracy, and thus need to be minimized. In this paper, a modified common points method (MCPM) is proposed to combine the different traceable system errors of the cooperating machines and to optimize the global accuracy by introducing mutual geometric constraints. The geometric constraints, obtained by measuring the common points in the individual local coordinate systems, provide the possibility of reducing the local measuring uncertainty and thereby enhancing the global measuring certainty. A simulation system is developed in Matlab to analyze the features of MCPM using the Monte Carlo method. An exemplary setup is constructed to verify the feasibility and efficiency of the proposed method with a laser tracker and an indoor iGPS system. Experimental results show that MCPM can significantly improve the alignment accuracy.
Adaptive Fuzzy Bounded Control for Consensus of Multiple Strict-Feedback Nonlinear Systems.
Wang, Wei; Tong, Shaocheng
2018-02-01
This paper studies the adaptive fuzzy bounded control problem for leader-follower multiagent systems, where each follower is modeled by an uncertain nonlinear strict-feedback system. Combining fuzzy approximation with dynamic surface control, an adaptive fuzzy control scheme is developed to guarantee the output consensus of all agents under directed communication topologies. Different from existing results, the bounds of the control inputs are known a priori, and they can be determined by the feedback control gains. To realize smooth and fast learning, a predictor is introduced to estimate each error surface, and the corresponding predictor error is employed to learn the optimal fuzzy parameter vector. It is proved that the developed adaptive fuzzy control scheme guarantees the uniformly ultimate boundedness of the closed-loop systems, and the tracking error converges to a small neighborhood of the origin. Simulation results and comparisons are provided to show the validity of the control strategy presented in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.
2010-08-15
The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the following six combinations of input variables are considered: (I) Day of the year, daily mean air temperature and relative humidity as inputs and daily GSR as output. (II) Day of the year, daily mean air temperature and sunshine hours as inputs and daily GSR as output. (III) Day of the year, daily mean air temperature, relative humidity and sunshine hours as inputs and daily GSR as output. (IV) Day of the year, daily mean air temperature, relative humidity, sunshine hours and evaporation as inputs and daily GSR as output. (V) Day of the year, daily mean air temperature, relative humidity, sunshine hours and wind speed as inputs and daily GSR as output. (VI) Day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation and wind speed as inputs and daily GSR as output. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data. The comparison of the results obtained from the ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements (i.e. the predicted values of the best ANN model (MLP-V) have a mean absolute percentage error (MAPE) of about 5.21% versus 10.02% for the best CGSRP model (CGSRP 5)). (author)
An Optimal Current Observer for Predictive Current Controlled Buck DC-DC Converters
Min, Run; Chen, Chen; Zhang, Xiaodong; Zou, Xuecheng; Tong, Qiaoling; Zhang, Qiao
2014-01-01
In digital current-mode-controlled DC-DC converters, conventional current sensors might not provide isolation at a minimized price, power loss and size. Therefore, a current observer, which can be realized using the digital circuit itself, is a possible substitute. However, the observed current may diverge due to the parasitic resistances and the forward conduction voltage of the diode. Moreover, the divergence of the observed current will cause steady-state errors in the output voltage. In this paper, an optimal current observer is proposed. It achieves the highest observation accuracy by compensating for all the known parasitic parameters. By employing the optimal current observer-based predictive current controller, a buck converter is implemented. The converter has a convergently and accurately observed inductor current, and shows a better transient response than the conventional voltage-mode-controlled converter. Besides, cost, power loss and size are minimized since the strategy requires no additional hardware for current sensing. The effectiveness of the proposed optimal current observer is demonstrated experimentally. PMID:24854061
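The basic idea of observing the inductor current from the averaged buck-converter model (without the parasitic compensation that the proposed optimal observer adds) can be sketched in a few lines; the inductance, sampling period and measured quantities below are illustrative assumptions.

```python
def observe_inductor_current(i_hat, duty, v_in, v_out, L, Ts):
    """One step of a simple averaged-model inductor-current observer for a buck
    converter:  L * di/dt ~ duty*Vin - Vout, discretized with period Ts.
    In this ideal model the observed current is constant in steady state; in a real
    converter, unmodeled parasitic drops make such an open-loop observer drift,
    which is the divergence problem the optimal observer addresses."""
    return i_hat + (Ts / L) * (duty * v_in - v_out)

# illustrative numbers: 12 V input, 5 V output, 47 uH inductor, 100 kHz sampling
i_hat, L, Ts = 0.0, 47e-6, 10e-6
for _ in range(200):
    i_hat = observe_inductor_current(i_hat, duty=5.0 / 12.0, v_in=12.0, v_out=5.0, L=L, Ts=Ts)
```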
Optimization of Fish Protection System to Increase Technosphere Safety
NASA Astrophysics Data System (ADS)
Khetsuriani, E. D.; Fesenko, L. N.; Larin, D. S.
2017-11-01
The article is concerned with field study data. Drawing upon prior information and considering the structural features of fish protection devices, we decided to conduct experimental research while varying three parameters: the process pressure PCT, the stream velocity Vp and the washer nozzle inclination angle αc. The variability intervals of the examined factors are shown in Table 1. The conicity angle was assumed constant. The box design B3 was chosen as a baseline, being close to D-optimal designs in its statistical characteristics. The number of device rotations and its fish fry protection efficiency were taken as the output functions of the optimization. The numerical values of the regression coefficients of the quadratic equations describing the behavior of the optimization functions Y1 and Y2, and their formulaic errors, were calculated from the test results in accordance with the planning matrix. The adequacy or inadequacy of the obtained quadratic regression model is judged by checking whether Fexp ≤ Ftheor.
Optimal nonlinear codes for the perception of natural colours.
von der Twer, T; MacLeod, D I
2001-08-01
We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function the gradient of which is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for precision representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
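The prescription described above (a response function whose slope is proportional to the cube root of the input PDF) is straightforward to sketch numerically; the Gaussian input distribution below is only an illustrative stand-in for the distribution of natural colours along one colour dimension.

```python
import numpy as np

def optimal_response(pdf_values):
    """Compressive response whose slope is proportional to pdf**(1/3), normalized to [0, 1].
    Histogram equalization would instead match the slope to pdf**1 (i.e. the CDF).
    Assumes pdf_values are evaluated on a uniformly spaced grid."""
    slope = pdf_values ** (1.0 / 3.0)
    response = np.cumsum(slope)
    response -= response[0]
    return response / response[-1]

# illustrative input distribution along one colour dimension
x = np.linspace(-3.0, 3.0, 601)
pdf = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)
r_cube_root = optimal_response(pdf)                 # nonlinearity matched to pdf**(1/3)
r_hist_eq = np.cumsum(pdf) / np.sum(pdf)            # histogram-equalization response, for comparison
```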
NASA Technical Reports Server (NTRS)
Broussard, J. R.; Halyo, N.
1984-01-01
This report contains the development of a digital outer-loop three dimensional radio navigation (3-D RNAV) flight control system for a small commercial jet transport. The outer-loop control system is designed using optimal stochastic limited state feedback techniques. Options investigated using the optimal limited state feedback approach include integrated versus hierarchical control loop designs, 20 samples per second versus 5 samples per second outer-loop operation and alternative Type 1 integration command errors. Command generator tracking techniques used in the digital control design enable the jet transport to automatically track arbitrary curved flight paths generated by waypoints. The performance of the design is demonstrated using detailed nonlinear aircraft simulations in the terminal area, frequency domain multi-input sigma plots, frequency domain single-input Bode plots and closed-loop poles. The response of the system to a severe wind shear during a landing approach is also presented.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
Autonomous frequency domain identification: Theory and experiment
NASA Technical Reports Server (NTRS)
Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.
1989-01-01
The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. Spectral estimation (ĥ = P_uy/P_uu) is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p − p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available to be used for optimization of robust controller performance and stability.
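A minimal numerical sketch of the nonparametric steps is shown below using scipy: the transfer-function estimate ĥ = P_uy/P_uu from a stochastic input, and the additive-uncertainty estimate δ̂ = P_ue/P_uu from the output error of a deliberately low-order parametric fit. The plant and the fitted model are simple stand-in filters, not the flexible-structure dynamics of the experiment.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 100.0
u = rng.standard_normal(20000)                  # stochastic excitation u
b_true, a_true = signal.butter(2, 0.2)          # stand-in "true plant" p
y = signal.lfilter(b_true, a_true, u)           # measured output y = p u

# nonparametric transfer-function estimate  h_hat = P_uy / P_uu
f, P_uu = signal.welch(u, fs=fs, nperseg=1024)
_, P_uy = signal.csd(u, y, fs=fs, nperseg=1024)
h_hat = P_uy / P_uu

# low-order parametric fit p_hat (stand-in for the curve-fit model) and its output error
b_fit, a_fit = signal.butter(1, 0.2)
y_hat = signal.lfilter(b_fit, a_fit, u)
e = y - y_hat                                   # output error e = y - y_hat
_, P_ue = signal.csd(u, e, fs=fs, nperseg=1024)
delta_hat = P_ue / P_uu                         # additive-uncertainty estimate
```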
Scolletta, Sabino; Franchi, Federico; Romagnoli, Stefano; Carlà, Rossella; Donati, Abele; Fabbri, Lea P; Forfori, Francesco; Alonso-Iñigo, José M; Laviola, Silvia; Mangani, Valerio; Maj, Giulia; Martinelli, Giampaolo; Mirabella, Lucia; Morelli, Andrea; Persona, Paolo; Payen, Didier
2016-07-01
Echocardiography and pulse contour methods allow, respectively, noninvasive and less invasive cardiac output estimation. The aim of the present study was to compare Doppler echocardiography with the pulse contour method MostCare for cardiac output estimation in a large and nonselected critically ill population. A prospective multicenter observational comparison study. The study was conducted in 15 European medicosurgical ICUs. We assessed cardiac output in 400 patients in whom an echocardiographic evaluation was performed as a routine need or for cardiocirculatory assessment. None. One echocardiographic cardiac output measurement was compared with the corresponding MostCare cardiac output value per patient, considering different ICU admission categories and clinical conditions. For statistical analysis, we used Bland-Altman and linear regression analyses. To assess heterogeneity in the results of individual centers, Cochran's Q and the I² statistics were applied. A total of 400 paired echocardiographic cardiac output and MostCare cardiac output measures were compared. MostCare cardiac output values ranged from 1.95 to 9.90 L/min, and echocardiographic cardiac output ranged from 1.82 to 9.75 L/min. A significant correlation was found between echocardiographic cardiac output and MostCare cardiac output (r = 0.85; p < 0.0001). Among the different ICUs, the mean bias between echocardiographic cardiac output and MostCare cardiac output ranged from -0.40 to 0.45 L/min, and the percentage error ranged from 13.2% to 47.2%. Overall, the mean bias was -0.03 L/min, with 95% limits of agreement of -1.54 to 1.47 L/min and a relative percentage error of 30.1%. The percentage error was 24% in the sepsis category, 26% in the trauma category, 30% in the surgical category, and 33% in the medical admission category. The final overall percentage error was 27.3% with a 95% CI of 22.2-32.4%. Our results suggest that MostCare could be an alternative to echocardiography to assess cardiac output in ICU patients with a large spectrum of clinical conditions.
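The agreement statistics quoted above follow the usual Bland-Altman convention, with the percentage error taken as 1.96 times the standard deviation of the paired differences divided by the mean cardiac output; a small sketch with placeholder data is given below.

```python
import numpy as np

def bland_altman_percentage_error(co_ref, co_test):
    """Bias, 95% limits of agreement, and percentage error for paired cardiac
    output measurements (reference method vs. test method)."""
    co_ref, co_test = np.asarray(co_ref, float), np.asarray(co_test, float)
    diff = co_test - co_ref
    bias = diff.mean()
    sd = diff.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    pct_error = 100.0 * 1.96 * sd / np.mean((co_ref + co_test) / 2.0)
    return bias, limits, pct_error

# placeholder paired measurements in L/min (echocardiography vs. pulse contour)
echo = [4.2, 5.1, 3.8, 6.0, 4.9]
pulse_contour = [4.0, 5.4, 3.9, 5.7, 5.2]
bias, limits, pct_error = bland_altman_percentage_error(echo, pulse_contour)
```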
Patient safety: the landscape of the global research output and gender distribution.
Schreiber, Moritz; Klingelhöfer, Doris; Groneberg, David A; Brüggmann, Doerthe
2016-02-12
Patient safety is a crucial issue in medicine. Its main objective is to reduce the number of deaths and health damages caused by preventable medical errors. To achieve this, better health systems are needed that make mistakes less likely and their effects less detrimental, without blaming health workers for failures. Until now, there has been no in-depth scientometric analysis of this issue encompassing the interval between 1963 and 2014. Therefore, the aim of this study is to sketch a landscape of the past global research output on patient safety, including the gender distribution within the medical discipline of patient safety, by interpreting scientometric parameters. Additionally, respective future trends are outlined. The Core Collection of the scientific database Web of Science was searched for publications with the search term 'Patient Safety' as a title word, focused on the corresponding medical discipline. The resulting data set was analysed using the methodology implemented by the platform NewQIS. To visualise the geographical landscape, state-of-the-art techniques including density-equalising map projections were applied. 4079 articles on patient safety were identified in the period from 1900 to 2014. Most articles were published in North America, the UK and Australia. In regard to the overall number of publications, the USA is the leading country, while Switzerland showed the best ratio of output to population. With regard to the ratio of the number of publications to the Gross Domestic Product (GDP) per capita, the USA remains the leading nation, but countries like India and China with a low GDP and high population numbers also benefit. Although the topic is a global matter, the scientific output on patient safety is centred mainly in industrialised countries. Published by the BMJ Publishing Group Limited.
Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu
2015-01-01
Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Rixen, Daniel
1996-01-01
We present an optimal preconditioning algorithm that is equally applicable to the dual (FETI) and primal (Balancing) Schur complement domain decomposition methods, and which successfully addresses the problems of subdomain heterogeneities including the effects of large jumps of coefficients. The proposed preconditioner is derived from energy principles and embeds a new coarsening operator that propagates the error globally and accelerates convergence. The resulting iterative solver is illustrated with the solution of highly heterogeneous elasticity problems.
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting the few available samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeatedly sampling, and then averaging, those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are typically reduced by 50% to 60% for the present radiation code when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of about 3.
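A minimal sketch of the first variance-reduction idea described above (not the authors' radiation code): estimating a domain-average flux by Monte Carlo sampling of subcolumns, once with naive sampling over the whole domain and once with samples restricted to cloudy subcolumns plus a separately computed clear-sky flux. The cloud fraction, flux values, and sample counts below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

cloud_frac = 0.3          # assumed domain cloud fraction
flux_clear = 240.0        # hypothetical clear-sky flux (W m^-2)
flux_cloudy_mean = 180.0  # hypothetical mean cloudy-sky flux
flux_cloudy_std = 25.0    # variability among cloudy subcolumns
n_samples = 10
n_trials = 20000

# reference full-ICA domain average
truth = (1 - cloud_frac) * flux_clear + cloud_frac * flux_cloudy_mean

def naive_mcica():
    # sample subcolumns at random: each is clear or cloudy
    cloudy = rng.random(n_samples) < cloud_frac
    fluxes = np.where(cloudy,
                      rng.normal(flux_cloudy_mean, flux_cloudy_std, n_samples),
                      flux_clear)
    return fluxes.mean()

def cloudy_only_mcica():
    # clear-sky part handled exactly; samples spent only on cloudy subcolumns
    cloudy_fluxes = rng.normal(flux_cloudy_mean, flux_cloudy_std, n_samples)
    return (1 - cloud_frac) * flux_clear + cloud_frac * cloudy_fluxes.mean()

err_naive = np.std([naive_mcica() for _ in range(n_trials)])
err_cloudy = np.std([cloudy_only_mcica() for _ in range(n_trials)])
print(f"random-noise std, naive sampling      : {err_naive:.2f} W m^-2")
print(f"random-noise std, cloudy-only sampling: {err_cloudy:.2f} W m^-2")
```

Both estimators are unbiased with respect to the reference average; restricting samples to cloudy subcolumns removes the large clear/cloudy sampling noise, which is the effect the abstract exploits.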
Analyses of global sea surface temperature 1856-1991
NASA Astrophysics Data System (ADS)
Kaplan, Alexey; Cane, Mark A.; Kushnir, Yochanan; Clement, Amy C.; Blumenthal, M. Benno; Rajagopalan, Balaji
1998-08-01
Global analyses of monthly sea surface temperature (SST) anomalies from 1856 to 1991 are produced using three statistically based methods: optimal smoothing (OS), the Kalman filter (KF) and optimal interpolation (OI). Each of these is accompanied by estimates of the error covariance of the analyzed fields. The spatial covariance function these methods require is estimated from the available data; the time-marching model is a first-order autoregressive model again estimated from data. The data input for the analyses are monthly anomalies from the United Kingdom Meteorological Office historical sea surface temperature data set (MOHSST5) [Parker et al., 1994] of the Global Ocean Surface Temperature Atlas (GOSTA) [Bottomley et al., 1990]. These analyses are compared with each other, with GOSTA, and with an analysis generated by projection (P) onto a set of empirical orthogonal functions (as in Smith et al. [1996]). In theory, the quality of the analyses should rank in the order OS, KF, OI, P, and GOSTA. It is found that the first four give comparable results in the data-rich periods (1951-1991), but at times when data are sparse the first three differ significantly from P and GOSTA. At these times the latter two often have extreme and fluctuating values, prima facie evidence of error. The statistical schemes are also verified against data not used in any of the analyses (proxy records derived from corals and air temperature records from coastal and island stations). We also present evidence that the analysis error estimates are indeed indicative of the quality of the products. At most times the OS and KF products are close to the OI product, but at times of especially poor coverage their use of information from other times is advantageous. The methods appear to reconstruct the major features of the global SST field from very sparse data. Comparisons with other indications of the El Niño-Southern Oscillation cycle show that the analyses provide usable information on interannual variability as far back as the 1860s.
Moghtadaei, Motahareh; Hashemi Golpayegani, Mohammad Reza; Malekzadeh, Reza
2013-02-07
Identification of squamous dysplasia and esophageal squamous cell carcinoma (ESCC) is of great importance in the prevention of cancer. Computer-aided algorithms can be very useful for identifying people at higher risk of squamous dysplasia and ESCC. Such methods can limit clinical screening to people at higher risk. Different regression methods have been used to predict ESCC and dysplasia. In this paper, a Fuzzy Neural Network (FNN) model is selected for ESCC and dysplasia prediction. The inputs to the classifier are the risk factors. Since the relations between risk factors in the tumor system show complex nonlinear behavior compared with most ordinary data, the cost function of the model can have more local optima. Thus the need for global optimization methods is heightened. The proposed method in this paper is a Chaotic Optimization Algorithm (COA) followed by the common Error Back Propagation (EBP) local method. Since the model has many parameters, we use a strategy to reduce the dependency among parameters caused by the chaotic series generator. This dependency was not considered in previous COA methods. The algorithm is compared with the logistic regression model, one of the most successful recent methods for ESCC and dysplasia prediction. The results show a more precise prediction with lower mean and variance of error. Copyright © 2012 Elsevier Ltd. All rights reserved.
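A hedged, toy illustration of the two-stage idea described above, not the paper's FNN training: a chaotic global search that explores the parameter space with logistic-map series, followed by a local gradient refinement standing in for EBP. The cost function, parameter ranges, and step sizes are invented for illustration.

```python
import numpy as np

def cost(w):
    # toy multimodal objective standing in for the FNN training error
    return np.sum(w**2) + 2.0 * np.sum(np.sin(3.0 * w) ** 2)

dim, lo, hi = 4, -3.0, 3.0

# --- chaotic global search: logistic map x_{k+1} = 4 x_k (1 - x_k) in (0, 1) ---
x = np.random.default_rng(1).uniform(0.1, 0.9, dim)
best_w, best_c = None, np.inf
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)           # chaotic series generator, one series per parameter
    w = lo + (hi - lo) * x            # map chaotic variables to the parameter range
    c = cost(w)
    if c < best_c:
        best_w, best_c = w.copy(), c

# --- local refinement (finite-difference gradient descent, standing in for EBP) ---
w, step, eps = best_w, 0.02, 1e-5
for _ in range(300):
    grad = np.array([(cost(w + eps * e) - cost(w - eps * e)) / (2 * eps)
                     for e in np.eye(dim)])
    w -= step * grad

print("cost after chaotic search  :", best_c)
print("cost after local refinement:", cost(w))
```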
Zhong, Xungao; Zhong, Xunyu; Peng, Xiafu
2013-10-08
In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF), in conjunction with Elman neural network (ENN) learning techniques. The global map relationship between the vision space and the robotic workspace is learned using an ENN. This learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is arrived at using a robust KF to improve the ENN learning result so as to achieve precise convergence of the robot to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using a new input-output data pair vector (obtained from the KF cycle) to ensure globally stable robot manipulation. Thus, our method, without requiring either camera or model parameters, avoids the performance degradation caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with eye-in-hand configurations.
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
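A small, self-contained sketch of the basic ECOC idea referenced in this abstract: each class is assigned a binary code word, one binary classifier is trained per code bit, and a test point is assigned to the class whose code word is closest (in Hamming distance) to the vector of binary predictions. The toy data, code matrix, and unweighted Hamming decoding below are illustrative assumptions, not the weighted or probabilistic variants developed in the article.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# toy 4-class problem standing in for a gene-expression data set
X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# ECOC code matrix: one row per class, one column per binary classifier
code = np.array([[1, 1, 1, 0, 0],
                 [1, 0, 0, 1, 1],
                 [0, 1, 0, 1, 0],
                 [0, 0, 1, 0, 1]])

# train one binary classifier per code bit
learners = []
for bit in range(code.shape[1]):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, code[y, bit])          # relabel samples according to this bit
    learners.append(clf)

# decode: predict all bits, pick the class code word at minimum Hamming distance
bits = np.column_stack([clf.predict(X) for clf in learners])
hamming = (bits[:, None, :] != code[None, :, :]).sum(axis=2)
y_hat = hamming.argmin(axis=1)
print("training accuracy with ECOC decoding:", (y_hat == y).mean())
```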
Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models
NASA Astrophysics Data System (ADS)
Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas
2017-02-01
A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge on to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally, locally and un-identifiable model classes, and then to model updating of a two degree-of-freedom nonlinear structure with Duffing nonlinearities in its interstory force-deflection relationship.
Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy
NASA Astrophysics Data System (ADS)
Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping
2018-01-01
Prediction errors in renewable generation such as wind and solar power make power system dispatch difficult. In this paper, a multi-time scale robust scheduling method is proposed to solve this problem. It reduces the impact of clean-energy prediction bias on the power grid by scheduling over multiple time scales (day-ahead, intraday, real time) and coordinating the dispatched output of various power sources such as hydropower, thermal power, wind power and gas power. A robust scheduling formulation is adopted to ensure the robustness of the scheduling scheme. By calculating the cost of curtailed wind and lost load, it converts robustness into a risk cost and selects the uncertainty set that minimizes the overall cost. The validity of the method is verified by simulation.
Sjögren, Erik; Nyberg, Joakim; Magnusson, Mats O; Lennernäs, Hans; Hooker, Andrew; Bredberg, Ulf
2011-05-01
A penalized expectation of determinant (ED)-optimal design with a discrete parameter distribution was used to find an optimal experimental design for assessment of enzyme kinetics in a screening environment. A data set for enzyme kinetic data (V(max) and K(m)) was collected from previously reported studies, and every V(max)/K(m) pair (n = 76) was taken to represent a unique drug compound. The design was restricted to 15 samples, an incubation time of up to 40 min, and starting concentrations (C(0)) for the incubation between 0.01 and 100 μM. The optimization was performed by finding the sample times and C(0) returning the lowest uncertainty (S.E.) of the model parameter estimates. Individual optimal designs, one general optimal design and one pragmatic optimal design (OD) suitable for laboratory practice were obtained. In addition, a standard design (STD-D), representing a commonly applied approach for metabolic stability investigations, was constructed. Simulations were performed for OD and STD-D by using the Michaelis-Menten (MM) equation, and enzyme kinetic parameters were estimated with both MM and a monoexponential decay. OD generated a better result (relative standard error) for 99% of the compounds and an equal or better result [root mean square error (RMSE)] for 78% of the compounds in estimation of metabolic intrinsic clearance. Furthermore, high-quality estimates (RMSE < 30%) of both V(max) and K(m) could be obtained for a considerable number (26%) of the investigated compounds by using the suggested OD. The results presented in this study demonstrate that the output could generally be improved compared with that obtained from the standard approaches used today.
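A hedged illustration of the kind of estimation being compared here: simulating substrate depletion with Michaelis-Menten kinetics at one starting concentration and a set of sample times, then estimating V(max) and K(m) by nonlinear least squares. The parameter values, noise level, and sampling scheme are invented and do not reproduce the study's optimal design.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

vmax_true, km_true = 2.0, 5.0        # hypothetical enzyme parameters (uM/min, uM)
c0 = 10.0                            # starting concentration C0 (uM)
t_samples = np.linspace(0, 40, 15)   # 15 samples over a 40-min incubation

def mm_depletion(t, vmax, km):
    # integrate dC/dt = -vmax*C/(km + C) and return C(t) at the sample times
    dcdt = lambda c, _t: -vmax * c / (km + c)
    return odeint(dcdt, c0, t).ravel()

rng = np.random.default_rng(3)
obs = mm_depletion(t_samples, vmax_true, km_true) * (1 + 0.05 * rng.standard_normal(15))

(vmax_hat, km_hat), cov = curve_fit(mm_depletion, t_samples, obs,
                                    p0=[1.0, 2.0], bounds=(0, np.inf))
se = np.sqrt(np.diag(cov))
print(f"Vmax = {vmax_hat:.2f} +/- {se[0]:.2f},  Km = {km_hat:.2f} +/- {se[1]:.2f}")
print(f"intrinsic clearance CLint = Vmax/Km = {vmax_hat / km_hat:.3f}")
```

Choosing informative sample times and C(0) is exactly what shrinks the reported standard errors; the design itself is what the ED-optimization in the abstract provides.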
NASA Astrophysics Data System (ADS)
Karstedt, Jörg; Ogrzewalla, Jürgen; Severin, Christopher; Pischinger, Stefan
In this work, the concept development, system layout, component simulation and the overall DOE system optimization of an HT-PEM fuel cell APU with a net electric power output of 4.5 kW and an onboard methane fuel processor are presented. A highly integrated system layout has been developed that enables fast startup within 7.5 min, a closed system water balance and high fuel processor efficiencies of up to 85% due to the recuperation of the anode offgas burner heat. The integration of the system battery into the load management enhances the transient electric performance and the maximum electric power output of the APU system. Simulation models of the carbon monoxide influence on HT-PEM cell voltage, the concentration and temperature profiles within the autothermal reformer (ATR) and the CO conversion rates within the water-gas shift stages (WGSs) have been developed. They enable the optimization of the CO concentration in the anode gas of the fuel cell in order to achieve maximum system efficiencies and an optimized dimensioning of the ATR and WGS reactors. Furthermore, a DOE optimization of the global system parameters cathode stoichiometry, anode stoichiometry, air/fuel ratio and steam/carbon ratio of the fuel processing system has been performed in order to achieve maximum system efficiencies for all system operating points under given boundary conditions.
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
Output Error Analysis of Planar 2-DOF Five-bar Mechanism
NASA Astrophysics Data System (ADS)
Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang
2018-03-01
To address the mechanism error caused by joint clearance in a planar 2-DOF five-bar mechanism, the clearance of each kinematic pair is modeled as an equivalent virtual link. A structural error model for revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of joint clearance on the output error of the mechanism is studied, and the calculation method and basis for the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space due to joint clearance, which provides a new way to analyze planar parallel mechanism errors caused by joint clearance.
iTOUGH2: A multiphysics simulation-optimization framework for analyzing subsurface systems
NASA Astrophysics Data System (ADS)
Finsterle, S.; Commer, M.; Edmiston, J. K.; Jung, Y.; Kowalsky, M. B.; Pau, G. S. H.; Wainwright, H. M.; Zhang, Y.
2017-11-01
iTOUGH2 is a simulation-optimization framework for the TOUGH suite of nonisothermal multiphase flow models and related simulators of geophysical, geochemical, and geomechanical processes. After appropriate parameterization of subsurface structures and their properties, iTOUGH2 runs simulations for multiple parameter sets and analyzes the resulting output for parameter estimation through automatic model calibration, local and global sensitivity analyses, data-worth analyses, and uncertainty propagation analyses. Development of iTOUGH2 is driven by scientific challenges and user needs, with new capabilities continually added to both the forward simulator and the optimization framework. This review article provides a summary description of methods and features implemented in iTOUGH2, and discusses the usefulness and limitations of an integrated simulation-optimization workflow in support of the characterization and analysis of complex multiphysics subsurface systems.
Theoretical model for design and analysis of protectional eyewear.
Zelzer, B; Speck, A; Langenbucher, A; Eppig, T
2013-05-01
Protectional eyewear has to fulfill both mechanical and optical stress tests. To pass the optical tests, the surfaces of safety spectacles have to be optimized to minimize optical aberrations. Starting with the surface data of three measured safety spectacles, a theoretical spectacle model (four spherical surfaces) is recalculated first and then optimized while keeping the front surface unchanged. In addition to spherical power, astigmatic power and prism imbalance, we used the wavefront error (five different viewing directions) to simulate the optical performance and to optimize the safety spectacle geometries. All surfaces were spherical (maximum global deviation 'peak-to-valley' between the measured surface and the best-fit sphere: 0.132 mm). Except for the spherical power of the model Axcont (-0.07 m⁻¹), all simulated optical performance before optimization was better than the limits defined by standards. The optimization reduced the wavefront error by 1% to 0.150 λ (Windor/Infield), by 63% to 0.194 λ (Axcont/Bolle) and by 55% to 0.199 λ (2720/3M) without dropping below the measured thickness. The simulated optical performance of spectacle designs could be improved when using a smart optimization. A good optical design counteracts degradation by parameter variation throughout the manufacturing process. Copyright © 2013. Published by Elsevier GmbH.
Distributed Adaptive Fuzzy Control for Nonlinear Multiagent Systems Via Sliding Mode Observers.
Shen, Qikun; Shi, Peng; Shi, Yan
2016-12-01
In this paper, the problem of distributed adaptive fuzzy control is investigated for high-order uncertain nonlinear multiagent systems on a directed graph with a fixed topology. It is assumed that only the outputs of each follower and its neighbors are available in the design of its distributed controllers. Equivalent output injection sliding mode observers are proposed for each follower to estimate the states of itself and its neighbors, and an observer-based distributed adaptive controller is designed for each follower to guarantee that it asymptotically synchronizes to a leader with tracking errors being semi-globally uniformly ultimately bounded, in which fuzzy logic systems are utilized to approximate unknown functions. Based on algebraic graph theory and a Lyapunov function approach, within the Filippov framework, the closed-loop system stability analysis is conducted. Finally, numerical simulations are provided to illustrate the effectiveness and potential of the developed design techniques.
Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.
Heydari, Ali; Balakrishnan, Sivasubramanya N
2013-01-01
To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs are provided for 1) the reinforcement learning-based training method to the optimal solution, 2) the training error, and 3) the network weights. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights and it provides comprehensive feedback solutions online, though it is trained offline.
Optimal sequential measurements for bipartite state discrimination
NASA Astrophysics Data System (ADS)
Croke, Sarah; Barnett, Stephen M.; Weir, Graeme
2017-05-01
State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
B-spline goal-oriented error estimators for geometrically nonlinear rods
2011-04-01
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that will also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's output must be close to the observed measurements. To achieve this optimal simulation, input parameters are tuned until we have minimized the objective function, which is the error between the simulation model outputs and the observed measurements. We developed an auxiliary package, which serves as a python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, as well as sensitivity analysis while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows the users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model, which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum. Otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters within 2% accuracy of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox could be used to perform analysis and optimization on a `black box' scientific model more efficiently than using just Dakota.
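A minimal sketch of the calibration loop described above: treat the simulation as a black box, define the objective as the misfit between model output and observations, and hand it to an optimizer. The `heat_model` function and its two conductivity-like parameters are hypothetical stand-ins for the permafrost heat-flow model; the workflow, not the physics, is the point.

```python
import numpy as np
from scipy.optimize import minimize

depths = np.linspace(0.0, 10.0, 25)            # observation depths (m)

def heat_model(params, z):
    # hypothetical black-box "simulation": temperature profile whose shape
    # depends on two thermal-conductivity-like coefficients k1 and k2
    k1, k2 = params
    return -2.0 + z / k1 + z**2 / (20.0 * k2)

true_params = np.array([2.5, 1.3])
observed = heat_model(true_params, depths) \
    + 0.05 * np.random.default_rng(0).standard_normal(depths.size)

def objective(params):
    # misfit between simulated output and observed measurements (sum of squares)
    return np.sum((heat_model(params, depths) - observed) ** 2)

result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
print("recovered parameters:", result.x)       # should land near [2.5, 1.3]
print("objective at optimum:", result.fun)
```

In the abstract's setting, a gradient-based or genetic Dakota method plays the role of `minimize`, and multiple such runs support the parameter-space exploration and sensitivity analyses.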
Bastian, Nathaniel D; Ekin, Tahir; Kang, Hyojung; Griffin, Paul M; Fulton, Lawrence V; Grannan, Benjamin C
2017-06-01
The management of hospitals within fixed-input health systems such as the U.S. Military Health System (MHS) can be challenging due to the large number of hospitals, as well as the uncertainty in input resources and achievable outputs. This paper introduces a stochastic multi-objective auto-optimization model (SMAOM) for resource allocation decision-making in fixed-input health systems. The model can automatically identify where to re-allocate system input resources at the hospital level in order to optimize overall system performance, while considering uncertainty in the model parameters. The model is applied to 128 hospitals in the three services (Air Force, Army, and Navy) in the MHS using hospital-level data from 2009 - 2013. The results are compared to the traditional input-oriented variable returns-to-scale Data Envelopment Analysis (DEA) model. The application of SMAOM to the MHS increases the expected system-wide technical efficiency by 18 % over the DEA model while also accounting for uncertainty of health system inputs and outputs. The developed method is useful for decision-makers in the Defense Health Agency (DHA), who have a strategic level objective of integrating clinical and business processes through better sharing of resources across the MHS and through system-wide standardization across the services. It is also less sensitive to data outliers or sampling errors than traditional DEA methods.
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of the differencing (i.e., using two homogeneous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single-sensor's system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by a nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system output is expressed in the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
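A simplified, hypothetical sketch of the scalar-reference idea used above: estimate a single three-axis sensor's biases and scale factors by Levenberg–Marquardt nonlinear least squares so that the magnitude of the corrected vector output matches a known total-field reference. The real method estimates 12 error parameters per sensor plus inter-sensor misalignment (48 in total); here only 6 parameters of one sensor are fitted, purely as an illustration with invented values.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
tmi = 50000.0                                   # reference total magnetic intensity (nT)

# synthetic true field vectors with constant magnitude but varying orientation
n = 200
dirs = rng.standard_normal((n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_field = tmi * dirs

# corrupt with per-axis scale factors and biases (the errors to be calibrated)
scale_true = np.array([1.02, 0.97, 1.05])
bias_true = np.array([150.0, -80.0, 60.0])
measured = true_field * scale_true + bias_true + 5.0 * rng.standard_normal((n, 3))

def residuals(p):
    scale, bias = p[:3], p[3:]
    corrected = (measured - bias) / scale
    # residual: corrected vector magnitude minus the scalar TMI reference
    return np.linalg.norm(corrected, axis=1) - tmi

fit = least_squares(residuals, x0=np.array([1, 1, 1, 0, 0, 0.0]), method="lm")
print("estimated scale factors:", fit.x[:3])
print("estimated biases       :", fit.x[3:])
```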
Evaluating video digitizer errors
NASA Astrophysics Data System (ADS)
Peterson, C.
2016-01-01
Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.
The precision of a special purpose analog computer in clinical cardiac output determination.
Sullivan, F J; Mroz, E A; Miller, R E
1975-01-01
Three hundred dye-dilution curves taken during our first year of clinical experience with the Waters CO-4 cardiac output computer were analyzed to estimate the errors involved in its use. Provided that calibration is accurate and 5.0 mg of dye are injected for each curve, then the percentage standard deviation of measurement using this computer is about 8.7%. Included in this are the errors inherent in the computer, errors due to baseline drift, errors in the injection of dye and actual variation of cardiac output over a series of successive determinations. The size of this error is comparable to that involved in manual calculation. The mean value of five successive curves will be within 10% of the real value in 99 cases out of 100. Advances in methodology and equipment are discussed which make calibration simpler and more accurate, and which should also improve the quality of computer determination. A list of suggestions is given to minimize the errors involved in the clinical use of this equipment. PMID:1089394
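A quick consistency check, under the usual assumption of independent and roughly normal measurement errors: the standard error of the mean of five determinations is 8.7%/√5 ≈ 3.9%, and a two-sided 99% interval is about ±2.58 standard errors ≈ ±10%, matching the statement above.

```python
import math

sd_single = 8.7                      # % standard deviation of one dye-dilution curve
n_curves = 5
sd_mean = sd_single / math.sqrt(n_curves)
z_99 = 2.576                         # two-sided 99% normal quantile
print(f"SD of mean of {n_curves} curves: {sd_mean:.1f}%")
print(f"99% interval half-width: {z_99 * sd_mean:.1f}%")   # ~10%, as stated
```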
Predicting the sky from 30 MHz to 800 GHz: the extended Global Sky Model
NASA Astrophysics Data System (ADS)
Liu, Adrian
We propose to construct the extended Global Sky Model (eGSM), a software package and associated data products that are capable of generating maps of the sky at any frequency within a broad range (30 MHz to 800 GHz). The eGSM is constructed from archival data, and its outputs will include not only "best estimate" sky maps, but also accurate error bars and the ability to generate random realizations of missing modes in the input data. Such views of the sky are crucial in the practice of precision cosmology, where our ability to constrain cosmological parameters and detect new phenomena (such as B-mode signatures from primordial gravitational waves, or spectral distortions of the Cosmic Microwave Background; CMB) rests crucially on our ability to remove systematic foreground contamination. Doing so requires empirical measurements of the foreground sky brightness (such as that arising from Galactic synchrotron radiation, among other sources), which are typically performed only at select narrow wavelength ranges. We aim to transcend traditional wavelength limits by optimally combining existing data to provide a comprehensive view of the foreground sky at any frequency within the broad range of 30 MHz to 800 GHz. Previous efforts to interpolate between multi-frequency maps resulted in the Global Sky Model (GSM) of de Oliveira-Costa et al. (2008), a software package that outputs foreground maps at any frequency of the user's choosing between 10 MHz and 100 GHz. However, the GSM has a number of shortcomings. First and foremost, the GSM does not include the latest archival data from the Planck satellite. Multi-frequency models depend crucially on data from Planck, WMAP, and COBE to provide high-frequency "anchor" maps. Another crucial shortcoming is the lack of error bars in the output maps. Finally, the GSM is only able to predict temperature (i.e., total intensity) maps, and not polarization information. With the recent release of Planck's polarized data products, the time is ripe for the inclusion of polarization and a general update of the GSM. In its first two phases, our proposed eGSM project will incorporate new data and improve analysis methods to eliminate all of the aforementioned flaws. The eGSM will have broad implications for future cosmological probes, including surveys of the highly redshifted 21 cm line (such as the proposed Dark Ages Radio Explorer satellite mission) and CMB experiments (such as the Primordial Inflation Polarization Explorer and the Primordial Inflation Explorer) targeting primordial B-mode polarization or spectral distortions. Forecasting exercises for such future experiments must include polarized foregrounds below current detection limits. The third phase of the eGSM will result in a software package that provides random realizations of dim polarized foregrounds that are below the sensitivities of current instruments. This requires the quantification of non-Gaussian and non-isotropic statistics of existing foreground surveys, adding value to existing archival maps. eGSM data products will be publicly hosted on the Legacy Archive for Microwave Background Data Analysis (LAMBDA) archive, including a publicly released code that enables future foreground surveys (whether ground-based or space-based) to easily incorporate additional data into the existing archive, further refining our model and maximizing the impact of existing archives beyond the lifetime of this proposal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, D P; Ritts, W D; Wharton, S
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors nor about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
2012-01-01
To build a life cycle assessment (LCA) database of Japanese products embracing their global supply chains in a manner requiring lower time and labor burdens, this study estimates the intensity of embodied global environmental burden for commodities produced in Japan. The intensity of embodied global environmental burden is a measure of the environmental burden generated globally by unit production of the commodity and can be used as life cycle inventory data in LCA. The calculation employs an input–output LCA method with a global link input–output model that defines a global system boundary grounded in a simplified multiregional input–output framework. As a result, the intensities of embodied global environmental burden for 406 Japanese commodities are determined in terms of energy consumption, greenhouse-gas emissions (carbon dioxide, methane, nitrous oxide, perfluorocarbons, hydrofluorocarbons, sulfur hexafluoride, and their summation), and air-pollutant emissions (nitrogen oxide and sulfur oxide). The uncertainties in the intensities of embodied global environmental burden attributable to the simplified structure of the global link input–output model are quantified using Monte Carlo simulation. In addition, by analyzing the structure of the embodied global greenhouse-gas intensities we characterize Japanese commodities in the context of LCA embracing global supply chains. PMID:22881452
NASA Astrophysics Data System (ADS)
Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza
2016-06-01
This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.
Hybrid Transverse Polar Navigation for High-Precision and Long-Term INSs.
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Zhang, Rong; Hu, Peida; Li, Haixia
2018-05-12
Transverse navigation has been proposed to help inertial navigation systems (INSs) fill the gap in polar navigation capability. However, as the transverse system does not have the ability to navigate globally, a complicated switch between the transverse and the traditional algorithms is necessary when the system moves across the polar circles. To maintain the inner continuity and consistency of the core algorithm, a hybrid transverse polar navigation scheme is proposed in this research based on a combination of Earth-fixed-frame mechanization and transverse-frame outputs. Furthermore, a thorough analysis of kinematic error characteristics, proper damping technology and corresponding long-term contributions of main error sources is conducted for the high-precision INSs. According to the analytical expressions of the long-term navigation errors in polar areas, the 24-h period symmetrical oscillation with a slowly divergent amplitude dominates the transverse horizontal position errors, and the first-order drift dominates the transverse azimuth error, which results from the gyro drift coefficients that occur in corresponding directions. Simulations are conducted to validate the theoretical analysis and the deduced analytical expressions. The results show that the proposed hybrid transverse navigation can ensure the same accuracy and oscillation characteristics in polar areas as the traditional algorithm in low and mid latitude regions.
Linear and Order Statistics Combiners for Pattern Classification
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)
2001-01-01
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
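A small numerical illustration (toy numbers, not from the chapter) of the stated factor-of-N result: averaging N unbiased classifiers with uncorrelated boundary errors shrinks the variance of the averaged boundary offset by 1/N, and the added error is proportional to that variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classifiers = 10
n_trials = 100000
sigma_b = 0.5   # std of each classifier's decision-boundary offset from the Bayes optimum

# individual boundary offsets, assumed unbiased and uncorrelated across classifiers
offsets = rng.normal(0.0, sigma_b, size=(n_trials, n_classifiers))
averaged = offsets.mean(axis=1)

var_single = offsets[:, 0].var()
var_combined = averaged.var()
print(f"boundary-offset variance, single classifier: {var_single:.4f}")
print(f"boundary-offset variance, {n_classifiers}-way average   : {var_combined:.4f}")
print(f"reduction factor ~ {var_single / var_combined:.1f}  (expected ~ {n_classifiers})")
```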
NASA Astrophysics Data System (ADS)
Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia
The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. These run-time recovery mechanisms consist of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with in an efficient manner by Samadi's algorithm [8], which works well in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce the processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.
Spectral purity study for IPDA lidar measurement of CO2
NASA Astrophysics Data System (ADS)
Ma, Hui; Liu, Dong; Xie, Chen-Bo; Tan, Min; Deng, Qian; Xu, Ji-Wei; Tian, Xiao-Min; Wang, Zhen-Zhu; Wang, Bang-Xin; Wang, Ying-Jian
2018-02-01
High-sensitivity observation of carbon dioxide (CO2) with global coverage is expected from space-borne integrated path differential absorption (IPDA) lidar, which has been designed as the next-generation measurement. Stringent precision of space-borne CO2 data, for example 1 ppm or better, is required to address the largest number of carbon cycle science questions. Spectral purity, defined as the ratio of the effectively absorbed energy to the total energy transmitted, is one of the most important system parameters of IPDA lidar and directly influences the precision of the CO2 retrieval. Because the column-averaged dry-air mixing ratio of CO2 is inferred from a comparison of the two echo pulse signals, laser output accompanied by unwanted, spectrally broadband background radiation would introduce significant systematic error. In this study, the spectral energy density line shape and the spectral impurity line shape are both modeled as Lorentz line shapes for the simulation, and the latter is assumed to be a component not absorbed by CO2. An error equation is deduced from IPDA detection theory for calculating the systematic error caused by spectral impurity. For a spectral purity of 99%, the induced error could reach up to 8.97 ppm.
Optimization of fixture layouts of glass laser optics using multiple kernel regression.
Su, Jianhua; Cao, Enhua; Qiao, Hong
2014-05-10
We aim to build an integrated fixturing model to describe the structural properties and thermal properties of the support frame of glass laser optics. Therefore, (a) a near global optimal set of clamps can be computed to minimize the surface shape error of the glass laser optic based on the proposed model, and (b) a desired surface shape error can be obtained by adjusting the clamping forces under various environmental temperatures based on the model. To construct the model, we develop a new multiple kernel learning method and call it multiple kernel support vector functional regression. The proposed method uses two layer regressions to group and order the data sources by the weights of the kernels and the factors of the layers. Because of that, the influences of the clamps and the temperature can be evaluated by grouping them into different layers.
Gradient ascent pulse engineering approach to CNOT gates in donor electron spin quantum computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, D.-B.; Goan, H.-S.
2008-11-07
In this paper, we demonstrate how gradient ascent pulse engineering (GRAPE) optimal control methods can be implemented on donor electron spin qubits in semiconductors with an architecture complementary to the original Kane's proposal. We focus on the high fidelity controlled-NOT (CNOT) gate and we explicitly find the digitized control sequences for a controlled-NOT gate by optimizing its fidelity using the effective, reduced donor electron spin Hamiltonian with external controls over the hyperfine A and exchange J interactions. We then simulate the CNOT-gate sequence with the full spin Hamiltonian and find that it has an error of 10⁻⁶, which is below the error threshold of 10⁻⁴ required for fault-tolerant quantum computation. Also, the CNOT gate operation time of 100 ns is 3 times faster than the 297 ns of the proposed global control scheme.
Reducing uncertainties in decadal variability of the global carbon budget with multiple datasets
Li, Wei; Ciais, Philippe; Wang, Yilong; Peng, Shushi; Broquet, Grégoire; Ballantyne, Ashley P.; Canadell, Josep G.; Cooper, Leila; Friedlingstein, Pierre; Le Quéré, Corinne; Myneni, Ranga B.; Peters, Glen P.; Piao, Shilong; Pongratz, Julia
2016-01-01
Conventional calculations of the global carbon budget infer the land sink as a residual between emissions, atmospheric accumulation, and the ocean sink. Thus, the land sink accumulates the errors from the other flux terms and bears the largest uncertainty. Here, we present a Bayesian fusion approach that combines multiple observations in different carbon reservoirs to optimize the land (B) and ocean (O) carbon sinks, land use change emissions (L), and indirectly fossil fuel emissions (F) from 1980 to 2014. Compared with the conventional approach, Bayesian optimization decreases the uncertainties in B by 41% and in O by 46%. The L uncertainty decreases by 47%, whereas F uncertainty is marginally improved through the knowledge of natural fluxes. Both ocean and net land uptake (B + L) rates have positive trends of 29 ± 8 and 37 ± 17 Tg C⋅y⁻² since 1980, respectively. Our Bayesian fusion of multiple observations reduces uncertainties, thereby allowing us to isolate important variability in global carbon cycle processes. PMID:27799533
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, which can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. The numerical experiments implemented on synthetic examples and a real data set also support our theoretical results.
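A hedged sketch of gradient descent in an RKHS with a robust, windowed loss. The Welsch-type loss ℓ_σ(r) = σ²(1 − exp(−r²/σ²)) is one common choice covered by a windowing function G; its derivative ℓ'_σ(r) = 2r·exp(−r²/σ²) down-weights large residuals. The kernel, bandwidth, learning rate, stopping time, and data below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# training data with a few gross outliers
n = 80
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)
y[::16] += 3.0                                    # outliers the robust loss should resist

sigma, eta, T = 1.0, 1.0, 3000                    # scale parameter, step size, stopping time
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2))   # Gaussian kernel matrix

# functional gradient descent on f = sum_i alpha_i K(x_i, .)
alpha = np.zeros(n)
for _ in range(T):
    r = K @ alpha - y                             # residuals of the current RKHS estimator
    grad = 2 * r * np.exp(-r ** 2 / sigma ** 2)   # derivative of the windowed robust loss
    alpha -= (eta / n) * grad                     # gradient step on the expansion coefficients

fitted = K @ alpha
clean = np.ones(n, bool)
clean[::16] = False                               # evaluate only on non-outlier points
rmse = np.sqrt(np.mean((fitted[clean] - np.sin(2 * np.pi * x[clean])) ** 2))
print("RMSE against the true regression function on clean points:", rmse)
```

Early stopping (the choice of T) acts as the regularizer here, which is the role it plays in the convergence analysis summarized above.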
Xiao, Xun; Geyer, Veikko F.; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F.
2016-01-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582
NASA Astrophysics Data System (ADS)
Robichaud, A.; Ménard, R.
2014-02-01
Multi-year objective analyses (OA) on a high spatiotemporal resolution for the warm season period (1 May to 31 October) for ground-level ozone and for fine particulate matter (diameter less than 2.5 microns (PM2.5)) are presented. The OA used in this study combines model outputs from the Canadian air quality forecast suite with US and Canadian observations from various air quality surface monitoring networks. The analyses are based on an optimal interpolation (OI) with capabilities for adaptive error statistics for ozone and PM2.5 and an explicit bias correction scheme for the PM2.5 analyses. The estimation of error statistics has been computed using a modified version of the Hollingsworth-Lönnberg (H-L) method. The error statistics are "tuned" using a χ2 (chi-square) diagnostic, a semi-empirical procedure that provides significantly better verification than without tuning. Successful cross-validation experiments were performed with an OA setup using 90% of data observations to build the objective analyses and with the remainder left out as an independent set of data for verification purposes. Furthermore, comparisons with other external sources of information (global models and PM2.5 satellite surface-derived or ground-based measurements) show reasonable agreement. The multi-year analyses obtained provide relatively high precision with an absolute yearly averaged systematic error of less than 0.6 ppbv (parts per billion by volume) and 0.7 μg m-3 (micrograms per cubic meter) for ozone and PM2.5, respectively, and a random error generally less than 9 ppbv for ozone and under 12 μg m-3 for PM2.5. This paper focuses on two applications: (1) presenting long-term averages of OA and analysis increments as a form of summer climatology; and (2) analyzing long-term (decadal) trends and inter-annual fluctuations using OA outputs. The results show that high percentiles of ozone and PM2.5 were both following a general decreasing trend in North America, with the eastern part of the United States showing the most widespread decrease, likely due to more effective pollution controls. Some locations, however, exhibited an increasing trend in the mean ozone and PM2.5, such as the northwestern part of North America (northwest US and Alberta). Conversely, the low percentiles are generally rising for ozone, which may be linked to the intercontinental transport of increased emissions from emerging countries. After removing the decadal trend, the inter-annual fluctuations of the high percentiles are largely explained by the temperature fluctuations for ozone and to a lesser extent by precipitation fluctuations for PM2.5. More interesting is the economic short-term change (as expressed by the variation of the US gross domestic product growth rate), which explains 37% of the total variance of inter-annual fluctuations of PM2.5 and 15% in the case of ozone.
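A compact sketch of the optimal interpolation (OI) step underlying analyses like these: the analysis equals the model background plus a weighted innovation, x_a = x_b + B Hᵀ (H B Hᵀ + R)⁻¹ (y − H x_b), with background error covariance B and observation error covariance R. The one-dimensional grid, covariance length scale, and error variances below are invented for illustration, not the operational settings or the adaptive error statistics described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D grid of analysis points and a handful of monitoring stations
n_grid = 50
x_grid = np.linspace(0.0, 100.0, n_grid)
obs_idx = np.array([5, 12, 23, 31, 44])            # grid indices holding stations

# assumed error statistics: Gaussian-shaped background covariance B, diagonal R
L, sig_b, sig_o = 20.0, 3.0, 1.0
B = sig_b**2 * np.exp(-(x_grid[:, None] - x_grid[None, :])**2 / (2 * L**2))
R = sig_o**2 * np.eye(obs_idx.size)
H = np.zeros((obs_idx.size, n_grid))
H[np.arange(obs_idx.size), obs_idx] = 1.0          # observation operator: pick station points

# synthetic truth, a background with spatially correlated errors drawn from B, and obs
truth = 30.0 + 10.0 * np.sin(x_grid / 15.0)
Lb = np.linalg.cholesky(B + 1e-6 * np.eye(n_grid))
background = truth + Lb @ rng.standard_normal(n_grid)
y = truth[obs_idx] + sig_o * rng.standard_normal(obs_idx.size)

# OI analysis: x_a = x_b + K (y - H x_b), with gain K = B H^T (H B H^T + R)^-1
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
analysis = background + K @ (y - H @ background)

print("RMSE background vs truth:", np.sqrt(np.mean((background - truth)**2)))
print("RMSE analysis   vs truth:", np.sqrt(np.mean((analysis - truth)**2)))
```

Tuning sig_b, sig_o, and L until the innovation statistics pass a χ² check is, in spirit, what the adaptive error statistics in the abstract accomplish.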
Scene-based nonuniformity correction algorithm based on interframe registration.
Zuo, Chao; Chen, Qian; Gu, Guohua; Sui, Xiubao
2011-06-01
In this paper, we present a simple and effective scene-based nonuniformity correction (NUC) method for infrared focal plane arrays based on interframe registration. This method estimates the global translation between two adjacent frames and minimizes the mean square error between the two properly registered images, so that any two detectors observing the same scene produce the same output value. In this way, the accumulation of registration error is avoided and the NUC can be achieved. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and its ability to capture temporal drifts in the nonuniformity parameters. The performance of the proposed technique is thoroughly studied with infrared image sequences with simulated nonuniformity and with infrared imagery exhibiting real nonuniformity. It shows significantly fast and reliable fixed-pattern noise reduction and obtains an effective frame-by-frame adaptive estimation of each detector's gain and offset.
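A schematic Python sketch of the interframe-registration idea follows. It is not the authors' implementation: the phase-correlation registration, the gain*raw + offset correction form, and the LMS step size are simplifying assumptions. The global shift between consecutive raw frames is estimated, and the per-detector gain and offset maps are nudged so that detectors that observed the same scene point produce the same corrected output.

import numpy as np

def estimate_shift(f1, f2):
    # global translation of f2 relative to f1 via phase correlation
    F = np.fft.fft2(f2) * np.conj(np.fft.fft2(f1))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > f1.shape[0] // 2: dy -= f1.shape[0]   # wrap negative shifts
    if dx > f1.shape[1] // 2: dx -= f1.shape[1]
    return dy, dx

def nuc_update(raw_prev, raw_curr, gain, offset, mu=1e-3):
    # one LMS-style step on the mean square error between the registered, corrected frames;
    # drives detectors that saw the same scene toward the same output value
    dy, dx = estimate_shift(raw_prev, raw_curr)
    prev_reg = np.roll(raw_prev, (dy, dx), axis=(0, 1))
    gain_reg = np.roll(gain, (dy, dx), axis=(0, 1))
    off_reg = np.roll(offset, (dy, dx), axis=(0, 1))
    err = (gain * raw_curr + offset) - (gain_reg * prev_reg + off_reg)
    gain = gain - mu * err * raw_curr
    offset = offset - mu * err
    return gain, offset

Pixels that wrap around at the image borders in np.roll would be masked out in a careful implementation.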
Experiments on active isolation using distributed PVDF error sensors
NASA Technical Reports Server (NTRS)
Lefebvre, S.; Guigou, C.; Fuller, C. R.
1992-01-01
A control system based on a two-channel narrow-band LMS algorithm is used to isolate periodic vibration at low frequencies on a structure composed of a rigid top plate mounted on a flexible receiving plate. The control performance of distributed PVDF error sensors and accelerometer point sensors is compared. For both sensors, high levels of global reduction, up to 32 dB, have been obtained. It is found that, by driving the PVDF strip output voltage to zero, the controller may force the structure to vibrate so that the integration of the strain under the length of the PVDF strip is zero. This ability of the PVDF sensors to act as spatial filters is especially relevant in active control of sound radiation. It is concluded that the PVDF sensors are flexible, nonfragile, and inexpensive and can be used as strain sensors for active control applications of vibration isolation and sound radiation.
Hovakimyan, N; Nardi, F; Calise, A; Kim, Nakwan
2002-01-01
We consider adaptive output feedback control of uncertain nonlinear systems, in which both the dynamics and the dimension of the regulated system may be unknown. However, the relative degree of the regulated output is assumed to be known. Given a smooth reference trajectory, the problem is to design a controller that forces the system measurement to track it with bounded errors. The classical approach requires a state observer. Finding a good observer for an uncertain nonlinear system is not an obvious task. We argue that it is sufficient to build an observer for the output tracking error. Ultimate boundedness of the error signals is shown through Lyapunov's direct method. The theoretical results are illustrated in the design of a controller for a fourth-order nonlinear system of relative degree two and a high-bandwidth attitude command system for a model R-50 helicopter.
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.
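As a concrete illustration of describing output errors as a first-order autoregressive process, the log-likelihood below scores the residuals between observed and simulated discharge under an AR(1) error model. The variable names and the stationary parameterization are assumptions made for this sketch, not the exact formulation used in the studies above.

import numpy as np

def ar1_loglik(residuals, sigma, phi):
    # AR(1) error model with stationary standard deviation sigma and lag-1
    # autocorrelation phi; innovations have variance sigma^2 * (1 - phi^2)
    r = np.asarray(residuals, dtype=float)
    ll = -0.5 * (np.log(2 * np.pi * sigma**2) + (r[0] / sigma) ** 2)   # first residual
    innov_var = sigma**2 * (1.0 - phi**2)
    e = r[1:] - phi * r[:-1]                                           # one-step innovations
    ll += np.sum(-0.5 * (np.log(2 * np.pi * innov_var) + e**2 / innov_var))
    return ll

# in a Bayesian calibration, sigma and phi would be inferred jointly with the
# hydrological model parameters, e.g. by adding ar1_loglik to the posterior density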
NASA Astrophysics Data System (ADS)
Taylor, Thomas E.; L'Ecuyer, Tristan; Slusser, James; Stephens, Graeme; Krotkov, Nick; Davis, John; Goering, Christian
2005-08-01
Extensive sensitivity and error characteristics of a recently developed optimal estimation retrieval algorithm, which simultaneously determines aerosol optical depth (AOD), aerosol single scatter albedo (SSA) and total ozone column (TOC) from ultra-violet irradiances, are described. The algorithm inverts measured diffuse and direct irradiances at 7 channels in the UV spectral range obtained from the United States Department of Agriculture's (USDA) UV-B Monitoring and Research Program's (UVMRP) network of 33 ground-based UV-MFRSR instruments to produce aerosol optical properties and TOC at all seven wavelengths. Sensitivity studies of the Tropospheric Ultra-violet/Visible (TUV) radiative transfer model performed for various operating modes (Delta-Eddington versus n-stream Discrete Ordinate) over domains of AOD, SSA, TOC, asymmetry parameter and surface albedo show that the solutions are well constrained. Realistic input error budgets and diagnostic and error outputs from the retrieval are analyzed to demonstrate the atmospheric conditions under which the retrieval provides useful and significant results. After optimizing the algorithm for the USDA site in Panther Junction, Texas, the retrieval algorithm was run on a cloud-screened set of irradiance measurements for the month of May 2003. Comparisons to independently derived AODs are favorable, with root mean square (RMS) differences of about 3% to 7% at 300 nm and less than 1% at 368 nm, on May 12 and 22, 2003. This retrieval method will be used to build an aerosol climatology and provide ground-truthing of satellite measurements by running it operationally on the USDA UV network database.
Yang, Anxiong; Stingl, Michael; Berry, David A.; Lohscheller, Jörg; Voigt, Daniel; Eysholdt, Ulrich; Döllinger, Michael
2011-01-01
With the use of an endoscopic, high-speed camera, vocal fold dynamics may be observed clinically during phonation. However, observation and subjective judgment alone may be insufficient for clinical diagnosis and documentation of improved vocal function, especially when the laryngeal disease lacks any clear morphological presentation. In this study, biomechanical parameters of the vocal folds are computed by adjusting the corresponding parameters of a three-dimensional model until the dynamics of both systems are similar. First, a mathematical optimization method is presented. Next, model parameters (such as pressure, tension and masses) are adjusted to reproduce vocal fold dynamics, and the deduced parameters are physiologically interpreted. Various combinations of global and local optimization techniques are attempted. Evaluation of the optimization procedure is performed using 50 synthetically generated data sets. The results show sufficient reliability, including 0.07 normalized error, 96% correlation, and 91% accuracy. The technique is also demonstrated on data from human hemilarynx experiments, in which a low normalized error (0.16) and high correlation (84%) values were achieved. In the future, this technique may be applied to clinical high-speed images, yielding objective measures with which to document improved vocal function of patients with voice disorders. PMID:21877808
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1973-01-01
The results are reported of research into the effects of signal quantization on the operation of a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed and is implemented by a digital computer program that is based on a digital simulation of the system. As output, the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
NASA Astrophysics Data System (ADS)
Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.
2013-10-01
As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) has outstanding advantages of good parallelism, wide measurement range and high measurement accuracy, which make it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on the measurement range and accuracy, and also restricts the cost of use, the optimization of station deployment is researched in this paper. Firstly, a positioning error model was established. Then, focusing on the small network consisting of three stations, the typical deployments and error distribution characteristics were studied. Finally, by measuring a simulated fuselage using typical deployments at the industrial site and comparing the results with a Laser Tracker, some conclusions are obtained. The comparison results show that under existing prototype conditions, the I_3 typical deployment, in which three stations are distributed in a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm in the range of 12 m. Meanwhile, the C_3 typical deployment, in which three stations are uniformly distributed over the half-circumference of a circle, has an average error of 0.17 mm and a maximum error of 0.28 mm. Obviously, the C_3 typical deployment controls precision better than the I_3 type. The research work provides effective theoretical support for global measurement network optimization in future work.
Blind system identification of two-thermocouple sensor based on cross-relation method.
Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian
2018-03-01
In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of the presence of the thermal inertia, there is a dynamic error in the dynamic temperature measurement. In order to eliminate the dynamic error, two-thermocouple sensor was used to measure dynamic gas temperature in constant velocity flow environments in this paper. Blind system identification of two-thermocouple sensor based on a cross-relation method was carried out. Particle swarm optimization algorithm was used to estimate time constants of two thermocouples and compared with the grid based search method. The method was validated on the experimental equipment built by using high temperature furnace, and the input dynamic temperature was reconstructed by using the output data of the thermocouple with small time constant.
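The cross-relation idea can be written compactly for two first-order sensor models: both thermocouples see the same gas temperature, so tau1*dy1/dt + y1 and tau2*dy2/dt + y2 should coincide at all times. The sketch below uses synthetic data, and a coarse grid search stands in for the PSO and grid-based search compared in the paper; it is an illustration, not the authors' code.

import numpy as np

def cross_relation_cost(pair, t, y1, y2):
    # residual of the cross relation tau1*dy1 + y1 = tau2*dy2 + y2
    tau1, tau2 = pair
    r = (tau1 * np.gradient(y1, t) + y1) - (tau2 * np.gradient(y2, t) + y2)
    return np.sum(r**2)

# synthetic demo: step change in gas temperature seen by two thermocouples
t = np.linspace(0.0, 2.0, 400)
T_gas = 500.0 * (t > 0.1)

def first_order_response(tau):
    y = np.zeros_like(t)
    dt = t[1] - t[0]
    for k in range(1, len(t)):
        y[k] = y[k - 1] + dt / tau * (T_gas[k - 1] - y[k - 1])   # explicit Euler
    return y

y1, y2 = first_order_response(0.05), first_order_response(0.15)

taus = np.linspace(0.01, 0.5, 50)
costs = np.array([[cross_relation_cost((a, b), t, y1, y2) for b in taus] for a in taus])
i, j = np.unravel_index(np.argmin(costs), costs.shape)
print(taus[i], taus[j])   # approximately 0.05 and 0.15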
Study on the stability and reliability of Clinotron at Y-band
NASA Astrophysics Data System (ADS)
Li, Shuang; Wang, Jianguo; Chen, Zaigao; Wang, Guangqiang; Wang, Dongyang; Teng, Yan
2017-11-01
To improve the stability and reliability of the Clinotron at the Y-band, some key issues are investigated, such as the synchronous operating mode, heat accumulation on the slow-wave structure, and errors in micro-fabrication. By analyzing the dispersion relationship, the working mode is determined to be the TM10 mode. The problem of heat dissipation on the comb is studied to make a trade-off in the choice of suitable working conditions, ensuring that the safety and efficiency of the device are guaranteed simultaneously. A study of the effect of fabrication tolerance on the device's performance is also conducted to determine the acceptable error during micro-fabrication, taking both the validity of the device and the cost of fabrication into consideration. Finally, the performance of the Clinotron under the optimized conditions demonstrates that it can work steadily at 315.89 GHz with an output power of about 12 W, demonstrating its stability and reliability.
SPS pilot signal design and power transponder analysis, volume 2, phase 3
NASA Technical Reports Server (NTRS)
Lindsey, W. C.; Scholtz, R. A.; Chie, C. M.
1980-01-01
The problem of pilot signal parameter optimization and the related problem of power transponder performance analysis for the Solar Power Satellite reference phase control system are addressed. Signal and interference models were established to enable specifications of the front end filters including both the notch filter and the antenna frequency response. A simulation program package was developed to be included in SOLARSIM to perform tradeoffs of system parameters based on minimizing the phase error for the pilot phase extraction. An analytical model that characterizes the overall power transponder operation was developed. From this model, the effects of different phase noise disturbance sources that contribute to phase variations at the output of the power transponders were studied and quantified. Results indicate that it is feasible to hold the antenna array phase error to less than one degree per power module for the type of disturbances modeled.
NASA Astrophysics Data System (ADS)
Makrakis, Dimitrios; Mathiopoulos, P. Takis
A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.
Lexical morphology and its role in the writing process: evidence from a case of acquired dysgraphia.
Badecker, W; Hillis, A; Caramazza, A
1990-06-01
A case of acquired dysgraphia is presented in which the deficit is attributed to an impairment at the level of the Graphemic Output Buffer. It is argued that this patient's performance can be used to identify the representational character of the processing units that are stored in the Orthographic Output Lexicon. In particular, it is argued that the distribution of spelling errors and the types of lexical items which affect error rates indicate that the lexical representations passed from the lexical output system to the Graphemic Output Buffer correspond to the productive morphemes of the language.
Automated optimal coordination of multiple-DOF neuromuscular actions in feedforward neuroprostheses.
Lujan, J Luis; Crago, Patrick E
2009-01-01
This paper describes a new method for designing feedforward controllers for multiple-muscle, multiple-DOF, motor system neural prostheses. The design process is based on experimental measurement of the forward input/output properties of the neuromechanical system and numerical optimization of stimulation patterns to meet muscle coactivation criteria, thus resolving the muscle redundancy (i.e., overcontrol) and the coupled DOF problems inherent in neuromechanical systems. We designed feedforward controllers to control the isometric forces at the tip of the thumb in two directions during stimulation of three thumb muscles as a model system. We tested the method experimentally in ten able-bodied individuals and one patient with spinal cord injury. Good control of isometric force in both DOFs was observed, with rms errors less than 10% of the force range in seven experiments and statistically significant correlations between the actual and target forces in all ten experiments. Systematic bias and slope errors were observed in a few experiments, likely due to the neuromuscular fatigue. Overall, the tests demonstrated the ability of a general design approach to satisfy both control and coactivation criteria in multiple-muscle, multiple-axis neuromechanical systems, which is applicable to a wide range of neuromechanical systems and stimulation electrodes.
Wessel, Jan R.; Aron, Adam R.
2016-01-01
Unexpected events are part of everyday experience. They come in several varieties – action errors, unexpected action outcomes, and unexpected perceptual events – and they lead to motor slowing and cognitive distraction. While different varieties of unexpected events have been studied largely independently, and many different mechanisms are thought to explain their effects on action and cognition, we suggest a unifying theory. We propose that unexpected events recruit a fronto-basal-ganglia network for stopping. This network includes specific prefrontal cortical nodes and is posited to project to the subthalamic nucleus, with a putative global suppressive effect on basal-ganglia output. We argue that unexpected events interrupt action and impact cognition, partly at least, by recruiting this global suppressive network. This provides a common mechanistic basis for different types of unexpected events, links the literatures on motor inhibition, performance-monitoring, attention, and working memory, and is relevant for understanding clinical symptoms of distractibility and mental inflexibility. PMID:28103476
SU-E-T-257: Output Constancy: Reducing Measurement Variations in a Large Practice Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedrick, K; Fitzgerald, T; Miller, R
2014-06-01
Purpose: To standardize output constancy check procedures in a large medical physics practice group covering multiple sites, in order to identify and reduce small systematic errors caused by differences in equipment and the procedures of multiple physicists. Methods: A standardized machine output constancy check for both photons and electrons was instituted within the practice group in 2010. After conducting annual TG-51 measurements in water and adjusting the linac to deliver 1.00 cGy/MU at Dmax, an acrylic phantom (comparable at all sites) and PTW Farmer ion chamber are used to obtain monthly output constancy reference readings. From the collected charge reading, measurements of air pressure and temperature, and chamber Ndw and Pelec, a value we call the Kacrylic factor is determined, relating the chamber reading in acrylic to the dose in water with standard set-up conditions. This procedure easily allows for multiple equipment combinations to be used at any site. The Kacrylic factors and output results from all sites and machines are logged monthly in a central database and used to monitor trends in calibration and output. Results: The practice group consists of 19 sites, currently with 34 Varian and 8 Elekta linacs (24 Varian and 5 Elekta linacs in 2010). Over the past three years, the standard deviation of Kacrylic factors measured on all machines decreased by 20% for photons and high energy electrons as systematic errors were found and reduced. Low energy electrons showed very little change in the distribution of Kacrylic values. Small errors in linac beam data were found by investigating outlier Kacrylic values. Conclusion: While the use of acrylic phantoms introduces an additional source of error through small differences in depth and effective depth, the new standardized procedure eliminates potential sources of error from using many different phantoms and results in more consistent output constancy measurements.
Parameterizing sorption isotherms using a hybrid global-local fitting procedure.
Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J
2017-05-01
Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of state-of-the-art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions - where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
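The hybrid global-local idea can be sketched in a few lines: a global optimizer first explores the parameter space of the isotherm, then a derivative-free local search polishes the best candidate. The Freundlich isotherm, the synthetic data, and the SciPy optimizers below are stand-ins for the particle swarm / Powell / Gauss-Marquardt-Levenberg stages described above, not the authors' implementation.

import numpy as np
from scipy.optimize import differential_evolution, minimize

# synthetic aqueous concentrations C and sorbed concentrations q (made-up units)
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q_obs = 3.0 * C**0.6 + np.random.default_rng(1).normal(0.0, 0.1, C.size)

def wssr(p):
    # weighted sum of squared residuals for a Freundlich isotherm q = Kf * C**n
    Kf, n = p
    return np.sum((q_obs - Kf * C**n) ** 2)

bounds = [(1e-3, 100.0), (0.05, 2.0)]
stage1 = differential_evolution(wssr, bounds, seed=0)        # global stage
stage2 = minimize(wssr, stage1.x, method="Powell")           # local polish
print(stage2.x, stage2.fun)

Restarting the local stage from the global optimum is what guards against the local minima reported for many of the previously published fits.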
A methodology for designing robust multivariable nonlinear control systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Grunberg, D. B.
1986-01-01
A new methodology is described for the design of nonlinear dynamic controllers for nonlinear multivariable systems providing guarantees of closed-loop stability, performance, and robustness. The methodology is an extension of the Linear-Quadratic-Gaussian with Loop-Transfer-Recovery (LQG/LTR) methodology for linear systems, thus hinging upon the idea of constructing an approximate inverse operator for the plant. A major feature of the methodology is a unification of both the state-space and input-output formulations. In addition, new results on stability theory, nonlinear state estimation, and optimal nonlinear regulator theory are presented, including the guaranteed global properties of the extended Kalman filter and optimal nonlinear regulators.
Design and optimization of color lookup tables on a simplex topology.
Monga, Vishal; Bala, Raja; Mo, Xuan
2012-04-01
An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, has recently been of great interest in color LUT design because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or the output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against the state-of-the-art lattice (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy even with a much smaller number of nodes.
Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement.
Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon
2017-02-24
The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual-frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error from 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of low-cost SF receivers comparable to that of DF receivers.
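For reference, the conventional Hatch filter recursion that the divergence-free variant builds on is shown below. Code and carrier measurements are assumed to be in meters at common epochs; the divergence-free version would additionally correct the carrier increment for ionospheric divergence (e.g., using the SBAS message, as proposed above). This is a generic sketch, not the filter exactly as specified in the paper.

import numpy as np

def hatch_filter(code, carrier, window=100):
    # smooth noisy code pseudoranges with carrier-phase increments:
    # P_hat[k] = code[k]/M + (M-1)/M * (P_hat[k-1] + (carrier[k] - carrier[k-1]))
    smoothed = np.empty(len(code), dtype=float)
    smoothed[0] = code[0]
    for k in range(1, len(code)):
        M = min(k + 1, window)                   # smoothing window width
        predicted = smoothed[k - 1] + (carrier[k] - carrier[k - 1])
        smoothed[k] = code[k] / M + predicted * (M - 1) / M
    return smoothed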
A New Ensemble Canonical Correlation Prediction Scheme for Seasonal Precipitation
NASA Technical Reports Server (NTRS)
Kim, Kyu-Myong; Lau, William K. M.; Li, Guilong; Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)
2001-01-01
This paper describes the fundamental theory of the ensemble canonical correlation (ECC) algorithm for seasonal climate forecasting. The algorithm is a statistical regression scheme based on maximal correlation between the predictor and predictand. The prediction error is estimated by a spectral method using the basis of empirical orthogonal functions. The ECC algorithm treats the predictors and predictands as continuous fields and is an improvement over the traditional canonical correlation prediction. The improvements include the use of an area factor, estimation of the prediction error, and the optimal ensemble of multiple forecasts. The ECC is applied to seasonal forecasting over various parts of the world. The example presented here is for North American precipitation. The predictor is the sea surface temperature (SST) from different ocean basins. The Climate Prediction Center's reconstructed SST (1951-1999) is used as the predictor's historical data. The optimally interpolated global monthly precipitation is used as the predictand's historical data. Our forecast experiments show that the ECC algorithm renders very high skill and that the optimal ensemble is very important to achieving this high skill.
NASA Astrophysics Data System (ADS)
Meier, Walter Neil
This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and to demonstrate the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced due to noise in the SSM/I motions and is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.
Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods †
Gonzalez-Navarro, Felix F.; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A.; Flores-Rios, Brenda L.; Ibarra-Esquer, Jorge E.
2016-01-01
Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization. PMID:27792165
NASA Astrophysics Data System (ADS)
Chaidee, S.; Pakawanwong, P.; Suppakitpaisarn, V.; Teerasawat, P.
2017-09-01
In this work, we devise an efficient method for the land-use optimization problem based on Laguerre Voronoi diagram. Previous Voronoi diagram-based methods are more efficient and more suitable for interactive design than discrete optimization-based method, but, in many cases, their outputs do not satisfy area constraints. To cope with the problem, we propose a force-directed graph drawing algorithm, which automatically allocates generating points of Voronoi diagram to appropriate positions. Then, we construct a Laguerre Voronoi diagram based on these generating points, use linear programs to adjust each cell, and reconstruct the diagram based on the adjustment. We adopt the proposed method to the practical case study of Chiang Mai University's allocated land for a mixed-use complex. For this case study, compared to other Voronoi diagram-based method, we decrease the land allocation error by 62.557 %. Although our computation time is larger than the previous Voronoi-diagram-based method, it is still suitable for interactive design.
Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning
Kok, Kai Yit; Rajendran, Parvathy
2016-01-01
The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four tuning parameters exist for the differential evolution algorithm, namely, population size, differential weight, crossover, and generation number. These tuning parameters are required, together with a user-defined weighting between path cost and computational cost. However, the optimum settings of these tuning parameters vary with the application. Instead of trial and error, this paper presents a method for optimizing the differential evolution tuning parameters for UAV path planning. The parameters that this research focuses on are population size, differential weight, crossover, and generation number. The developed algorithm enables the user to simply define the desired weighting between path and computational cost and to converge within the minimum number of generations required for that requirement. In conclusion, the proposed optimization of the tuning parameters of the differential evolution algorithm for UAV path planning expedites convergence and improves the final output path and computational cost. PMID:26943630
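A compact DE/rand/1/bin implementation makes the four tuning parameters explicit; the sphere cost below is a placeholder for a real UAV path-cost function, and the whole sketch is illustrative rather than the optimization described in the paper.

import numpy as np

def de_optimize(cost, bounds, pop_size=20, F=0.8, CR=0.9, generations=100, seed=0):
    # pop_size, F (differential weight), CR (crossover rate) and generations
    # are the four tuning parameters discussed above
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                    # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])            # binomial crossover
            f = cost(trial)
            if f < fit[i]:                                     # greedy selection
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)], fit.min()

# placeholder cost; a path planner would score waypoint sets by length, risk, etc.
best, val = de_optimize(lambda x: float(np.sum(x**2)), bounds=[(-5.0, 5.0)] * 3)
print(best, val)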
Intercontinental height datum connection with GOCE and GPS-levelling data
NASA Astrophysics Data System (ADS)
Gruber, T.; Gerlach, C.; Haagmans, R.
2012-12-01
In this study an attempt is made to establish height system datum connections based upon a gravity field and steady-state ocean circulation explorer (GOCE) gravity field model and a set of global positioning system (GPS) and levelling data. The procedure applied is in principle straightforward. First, local geoid heights are obtained point-wise from GPS and levelling data. Then the mean of these geoid heights is computed for regions nominally referring to the same height datum. Subsequently, these local mean geoid heights are compared with a mean global geoid from GOCE for the same region. This way one can identify an offset of the local to the global geoid per region. This procedure is applied to a number of regions distributed worldwide. Results show that the vertical datum offset estimates strongly depend on the nature of the omission error, i.e. the signal not represented in the GOCE model. For a smooth gravity field, the commission error of GOCE, the quality of the GPS and levelling data and the averaging control the accuracy of the vertical datum offset estimates. In cases where the omission error does not cancel out in the mean value computation, because of a sub-optimal point distribution or a characteristic behaviour of the omitted part of the geoid signal, one needs to estimate a correction for the omission error from other sources. For areas with dense and high-quality ground observations, the EGM2008 global model is a good choice for estimating the omission error correction in these cases. Relative intercontinental height datum offsets are estimated by applying this procedure between the United States of America (USA), Australia and Germany. These are compared to historical values provided in the literature and computed with the same procedure. The results obtained in this study agree at the level of 10 cm with the historical results. The changes can mainly be attributed to the new global geoid information from GOCE, rather than to the ellipsoidal heights or the levelled heights. These historical levelling data are still in use in many countries. This conclusion is supported by other results on the validation of the GOCE models.
Improving the color fidelity of cameras for advanced television systems
NASA Astrophysics Data System (ADS)
Kollarits, Richard V.; Gibbon, David C.
1992-08-01
In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. This comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to those of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five band camera is 2.5 times smaller than that obtained from the three band camera. Visual comparison of color matches on a characterized color monitor indicates that the five band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal handling complexity outside the camera. Likewise it is possible to construct a five band camera using only three sensors, as in conventional cameras. The principal drawback of the five band camera is the reduction in effective camera sensitivity by about 3/4 of an f-stop.
ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining
NASA Astrophysics Data System (ADS)
Chandrasekaran, Muthumari; Tamang, Santosh
2017-08-01
Metal Matrix Composites (MMC) show improved properties in comparison with non-reinforced alloys and have found increased application in automotive and aerospace industries. The selection of optimum machining parameters to produce components of desired surface roughness is of great concern considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz., spindle speed (N), feed rate (f) and depth of cut (d), were considered as input neurons and surface roughness was the output neuron. An ANN architecture of 3-5-1 is found to be optimum and the model predicts with an average percentage error of 7.72 %. The Particle Swarm Optimization (PSO) technique is used for optimizing parameters to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority for obtaining optimum cutting parameters satisfying the desired surface roughness. The method has better convergent capability with a minimum number of iterations.
The global Minmax k-means algorithm.
Wang, Xiaoyan; Bai, Yanping
2016-01-01
The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and when the initial positions are poor, the subsequent k-means runs can easily converge to poor local optima. In this paper, we first modify the global k-means algorithm to eliminate singleton clusters, and then apply the MinMax k-means clustering error criterion to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on several popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms considered in the paper.
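The incremental "global" search is easy to express with an off-the-shelf k-means routine. The sketch below shows the plain global k-means skeleton and is only illustrative; the singleton-cluster removal and the MinMax (maximum intra-cluster variance) objective that define the proposed algorithm are noted in comments but not implemented, and the toy data are invented.

import numpy as np
from sklearn.cluster import KMeans

def global_kmeans(X, K):
    # add one cluster center at a time; try every data point as the initial
    # position of the new center and keep the run with the lowest k-means error
    centers = X.mean(axis=0, keepdims=True)            # the optimal 1-means solution
    for k in range(2, K + 1):
        best = None
        for x in X:                                     # deterministic global search over candidates
            init = np.vstack([centers, x])
            km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
            if best is None or km.inertia_ < best.inertia_:
                best = km
        centers = best.cluster_centers_
        # the proposed variant would drop singleton clusters here and rank runs by the
        # MinMax criterion (maximum intra-cluster variance) instead of total inertia
    return centers

X = np.random.default_rng(0).normal(size=(200, 2)) + np.repeat(np.eye(2) * 4, 100, axis=0)
print(global_kmeans(X, 3))

Practical implementations restrict the candidate set (the "fast" global k-means variant) to avoid one k-means run per data point.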
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in the high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy of parallel mechanisms, but further efforts are required to control errors and improve accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a larger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
Fully anisotropic goal-oriented mesh adaptation for 3D steady Euler equations
NASA Astrophysics Data System (ADS)
Loseille, A.; Dervieux, A.; Alauzet, F.
2010-04-01
This paper studies the coupling between anisotropic mesh adaptation and goal-oriented error estimation. The former is very well suited to the control of the interpolation error. It is generally interpreted as a local geometric error estimate. In contrast, the latter is preferred when studying approximation errors for PDEs. It generally involves non-local error contributions. Consequently, a full and strong coupling between both is hard to achieve due to this apparent incompatibility. This paper shows how to achieve this coupling in three steps. First, a new a priori error estimate is proved in a formal framework adapted to goal-oriented mesh adaptation for output functionals. This estimate is based on a careful analysis of the contributions of the implicit error and of the interpolation error. Second, the error estimate is applied to the set of steady compressible Euler equations which are solved by a stabilized Galerkin finite element discretization. A goal-oriented error estimation is derived. It involves the interpolation error of the Euler fluxes weighted by the gradient of the adjoint state associated with the observed functional. Third, rewritten in the continuous mesh framework, the previous estimate is minimized on the set of continuous meshes thanks to a calculus of variations. The optimal continuous mesh is then derived analytically. Thus, it can be used as a metric tensor field to drive the mesh adaptation. From a numerical point of view, this method is completely automatic, intrinsically anisotropic, and does not depend on any a priori choice of variables to perform the adaptation. 3D examples of steady flows around supersonic and transonic jets are presented to validate the current approach and to demonstrate its efficiency.
NASA Astrophysics Data System (ADS)
Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua
2017-02-01
A new dataset of surface temperature over North America has been constructed by merging climate model results and empirical tree-ring data through the application of an optimal interpolation algorithm. Errors of both the Community Climate System Model version 4 (CCSM4) simulation and the tree-ring reconstruction were considered to optimize the combination of the two elements. Variance matching was used to reconstruct the surface temperature series. The model simulation provided the background field, and the error covariance matrix was estimated statistically using samples from the simulation results with a running 31-year window for each grid. Thus, the merging process could continue with a time-varying gain matrix. This merging method (MM) was tested using two types of experiment, and the results indicated that the standard deviation of errors was about 0.4 °C lower than the tree-ring reconstructions and about 0.5 °C lower than the model simulation. Because of internal variabilities and uncertainties in the external forcing data, the simulated decadal warm-cool periods were readjusted by the MM such that the decadal variability was more reliable (e.g., the 1940-1960s cooling). During the two centuries (1601-1800 AD) of the preindustrial period, the MM results revealed a compromised spatial pattern of the linear trend of surface temperature, which is in accordance with the phase transition of the Pacific decadal oscillation and Atlantic multidecadal oscillation. Compared with pure CCSM4 simulations, it was demonstrated that the MM brought a significant improvement to the decadal variability of the gridded temperature via the merging of temperature-sensitive tree-ring records.
Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1
NASA Astrophysics Data System (ADS)
Langenbrunner, B.; Neelin, J. D.
2017-09-01
Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.
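As a minimal illustration of the Pareto-front concept used here, the helper below filters a set of candidate parameter settings down to the non-dominated ones, given a matrix of objective-function values to be minimized. It is a generic utility sketch with invented numbers, not the cut-HDMR metamodel or the evolutionary algorithm described in the study.

import numpy as np

def pareto_mask(F):
    # F: rows = candidate parameter sets, columns = objective values (to minimize);
    # returns True for candidates that no other candidate dominates
    n = F.shape[0]
    mask = np.empty(n, dtype=bool)
    for i in range(n):
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        mask[i] = not dominated
    return mask

# e.g., two error measures (precipitation RMSE, skin-temperature RMSE) for 5 candidates
F = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [2.5, 2.5], [1.5, 3.5]])
print(pareto_mask(F))   # [ True  True  True False False]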
Application of parameter estimation to highly unstable aircraft
NASA Technical Reports Server (NTRS)
Maine, R. E.; Murray, J. E.
1986-01-01
This paper discusses the application of parameter estimation to highly unstable aircraft. It includes a discussion of the problems in applying the output error method to such aircraft and demonstrates that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.
Application of parameter estimation to highly unstable aircraft
NASA Technical Reports Server (NTRS)
Maine, R. E.; Murray, J. E.
1986-01-01
The application of parameter estimation to highly unstable aircraft is discussed. Included are a discussion of the problems in applying the output error method to such aircraft and a demonstration that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.
NASA Astrophysics Data System (ADS)
Pante, Gregor; Knippertz, Peter
2017-04-01
The West African monsoon is the driving element of weather and climate during summer in the Sahel region. It interacts with mesoscale convective systems (MCSs) and the African easterly jet and African easterly waves. Poor representation of convection in numerical models, particularly its organisation on the mesoscale, can result in unrealistic forecasts of the monsoon dynamics. Arguably, the parameterisation of convection is one of the main deficiencies in models over this region. Overall, this has negative impacts on forecasts over West Africa itself but may also affect remote regions, as waves originating from convective heating are badly represented. Here we investigate those remote forecast impacts based on daily initialised 10-day forecasts for July 2016 using the ICON model. One set of simulations employs the default setup of the global model with a horizontal grid spacing of 13 km. It is compared with simulations using the 2-way nesting capability of ICON. A second model domain over West Africa (the nest) with 6.5 km grid spacing is sufficient to explicitly resolve MCSs in this region. In the 2-way nested simulations, the prognostic variables of the global model are influenced by the results of the nest through relaxation. The nest with explicit convection is able to reproduce single MCSs much more realistically compared to the stand-alone global simulation with parameterised convection. Explicit convection leads to cooler temperatures in the lower troposphere (below 500 hPa) over the northern Sahel due to stronger evaporational cooling. Overall, the feedback of dynamic variables from the nest to the global model shows clear positive effects when evaluating the output of the global domain of the 2-way nesting simulation and the output of the stand-alone global model with ERA-Interim re-analyses. Averaged over the 2-way nested region, bias and root mean squared error (RMSE) of temperature, geopotential, wind and relative humidity are significantly reduced in the lower troposphere. Outside Africa over the Atlantic or in Europe the effect of the 2-way nesting becomes visible after some days of simulation. The changes in error measures are not as clear as in the nesting region itself but still improvements for some variables at different altitudes are evident, most likely due to a better representation of African easterly waves and Rossby waves. This work shows the importance of the West African region for global weather forecasts and the potential of convective permitting modelling in this region to improve the forecasts even far away from Africa in the future.
Model reference adaptive control of flexible robots in the presence of sudden load changes
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory
1991-01-01
Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive real condition, such MRAC procedures are designed so that a feedforward augmented output follows the reference model output, resulting in an ultimately bounded rather than zero output error. Modifications are therefore suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady state model following error, and thus encourage further use of MRAC for more complex flexible robotic systems.
The CARIBU EBIS control and synchronization system
NASA Astrophysics Data System (ADS)
Dickerson, Clayton; Peters, Christopher
2015-01-01
The Californium Rare Isotope Breeder Upgrade (CARIBU) Electron Beam Ion Source (EBIS) charge breeder has been built and tested. The bases of the CARIBU EBIS electrical system are four voltage platforms on which both DC and pulsed high voltage outputs are controlled. The high voltage output pulses are created with either a combination of a function generator and a high voltage amplifier, or two high voltage DC power supplies and a high voltage solid state switch. Proper synchronization of the pulsed voltages, fundamental to optimizing the charge breeding performance, is achieved with triggering from a digital delay pulse generator. The control system is based on National Instruments realtime controllers and LabVIEW software implementing Functional Global Variables (FGV) to store and access instrument parameters. Fiber optic converters enable network communication and triggering across the platforms.
Determining Global Population Distribution: Methods, Applications and Data
Balk, D.L.; Deichmann, U.; Yetman, G.; Pozzi, F.; Hay, S.I.; Nelson, A.
2011-01-01
Evaluating the total numbers of people at risk from infectious disease in the world requires not just tabular population data, but data that are spatially explicit and global in extent at a moderate resolution. This review describes the basic methods for constructing estimates of global population distribution with attention to recent advances in improving both spatial and temporal resolution. To evaluate the optimal resolution for the study of disease, the native resolution of the data inputs as well as that of the resulting outputs are discussed. Assumptions used to produce different population data sets are also described, with their implications for the study of infectious disease. Lastly, the application of these population data sets in studies to assess disease distribution and health impacts is reviewed. The data described in this review are distributed in the accompanying DVD. PMID:16647969
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
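To illustrate the kind of sampling-based error propagation described above, the following minimal sketch perturbs two small input rasters and one model coefficient with Latin Hypercube samples and summarizes the per-cell output uncertainty. It is a stand-in, not the REPTool geoprocessing interface; the toy model (out = coeff * A + B), the function names, and the error magnitudes are all assumptions chosen for illustration.

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n_samples, rng):
    """Stratified standard-normal samples via 1-D Latin Hypercube Sampling."""
    strata = (np.arange(n_samples) + rng.uniform(size=n_samples)) / n_samples
    rng.shuffle(strata)                   # random pairing across variables
    return norm.ppf(strata)               # map uniform strata to normal quantiles

def propagate(raster_a, raster_b, coeff, sigma_a, sigma_b, sigma_coeff,
              n_samples=500, seed=0):
    """Monte Carlo error propagation for the toy raster model out = coeff*A + B.

    sigma_a and sigma_b may be scalars (spatially invariant error) or arrays
    shaped like the rasters (spatially variable error).
    """
    rng = np.random.default_rng(seed)
    z_a, z_b, z_c = (latin_hypercube(n_samples, rng) for _ in range(3))
    outputs = np.empty((n_samples,) + np.shape(raster_a))
    for k in range(n_samples):
        a = raster_a + z_a[k] * sigma_a          # perturbed input raster A
        b = raster_b + z_b[k] * sigma_b          # perturbed input raster B
        c = coeff + z_c[k] * sigma_coeff         # perturbed model coefficient
        outputs[k] = c * a + b
    return outputs.mean(axis=0), outputs.std(axis=0)   # per-cell uncertainty

mean_out, std_out = propagate(np.full((3, 3), 10.0), np.full((3, 3), 2.0),
                              coeff=0.5, sigma_a=1.0, sigma_b=0.2, sigma_coeff=0.05)
print(std_out)
```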
Bermejo, Javier; Yotti, Raquel; Pérez del Villar, Candelas; del Álamo, Juan C; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Antoranz, J Carlos; Desco, M Mar; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco
2013-08-15
In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (derivative of left ventricular pressure with respect to volume at end-diastolic volume intraclass correlation coefficient = 0.65, error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart.
Image-optimized Coronal Magnetic Field Models
NASA Astrophysics Data System (ADS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-08-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Image-Optimized Coronal Magnetic Field Models
NASA Technical Reports Server (NTRS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-01-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work we presented early tests of the method which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and the effect on the outcome of the optimization of errors in localization of constraints. We find that substantial improvement in the model field can be achieved with this type of constraints, even when magnetic features in the images are located outside of the image plane.
NASA Astrophysics Data System (ADS)
Li, L. L.; Jin, C. L.; Ge, X.
2018-01-01
In this paper, the output regulation problem with dissipative property for a class of switched stochastic delay systems is investigated, based on an error-dependent switching law. Under the assumption that no subsystem is solvable for the problem, a sufficient condition is derived by constructing multiple Lyapunov-Krasovskii functionals with respect to multiple supply rates, via designing error feedback regulators. The condition is also established when the dissipative property reduces to the passive property. Finally, two numerical examples are given to demonstrate the feasibility and efficiency of the present method.
Low speed phaselock speed control system. [for brushless dc motor
NASA Technical Reports Server (NTRS)
Fulcher, R. W.; Sudey, J. (Inventor)
1975-01-01
A motor speed control system for an electronically commutated brushless dc motor is provided which includes a phaselock loop with bidirectional torque control for locking the frequency output of a high density encoder, responsive to actual speed conditions, to a reference frequency signal, corresponding to the desired speed. The system includes a phase comparator, which produces an output in accordance with the difference in phase between the reference and encoder frequency signals, and an integrator-digital-to-analog converter unit, which converts the comparator output into an analog error signal voltage. Compensation circuitry, including a biasing means, is provided to convert the analog error signal voltage to a bidirectional error signal voltage which is utilized by an absolute value amplifier, rotational decoder, power amplifier-commutators, and an arrangement of commutation circuitry.
NASA Astrophysics Data System (ADS)
Song, Xingliang; Sha, Pengfei; Fan, Yuanyuan; Jiang, R.; Zhao, Jiangshan; Zhou, Yi; Yang, Junhong; Xiong, Guangliang; Wang, Yu
2018-02-01
Due to the complex kinetics of formation and loss mechanisms, such as ion-ion recombination, neutral species harpoon reactions, excited state quenching and photon absorption, as well as their interactions, the influence of different laser gas medium parameters on excimer laser performance varies greatly. Therefore, the effects of gas composition and total gas pressure on excimer laser performance continue to attract research interest. In this work, orthogonal experimental design (OED) is used to investigate quantitative and qualitative correlations between output laser energy characteristics and gas medium parameters for an ArF excimer laser operating with a plano-plano optical resonator. Optimized output laser energy with good pulse-to-pulse stability can be obtained effectively by proper selection of the gas medium parameters, which makes the most of the ArF excimer laser device. A simple and efficient method for gas medium optimization is proposed and demonstrated experimentally, which provides a global and systematic solution. Through detailed statistical analysis, the significance ranking of the relevant parameter factors and the optimized composition of the gas medium parameters are obtained. Compared with the conventional route of varying a single gas parameter at a time, this paper presents a more comprehensive way of considering multiple variables simultaneously, which seems promising for striking an appropriate balance among various complicated parameters in power scaling studies of excimer lasers.
The impacts of observing flawed and flawless demonstrations on clinical skill learning.
Domuracki, Kurt; Wong, Arthur; Olivieri, Lori; Grierson, Lawrence E M
2015-02-01
Clinical skills expertise can be advanced through accessible and cost-effective video-based observational practice activities. Previous findings suggest that the observation of performances of skills that include flaws can be beneficial to trainees. Observing the scope of variability within a skilled movement allows learners to develop strategies to manage the potential for and consequences associated with errors. This study tests this observational learning approach on the development of the skills of central line insertion (CLI). Medical trainees with no CLI experience (n = 39) were randomised to three observational practice groups: a group which viewed and assessed videos of an expert performing a CLI without any errors (F); a group which viewed and assessed videos that contained a mix of flawless and errorful performances (E); and a group which viewed the same videos as the E group but was also given information concerning the correctness of their assessments (FA). All participants interacted with their observational videos each day for 4 days. Following this period, participants returned to the laboratory and performed a simulation-based insertion, which was assessed using a standard checklist and a global rating scale for the skill. These ratings served as the dependent measures for analysis. The checklist analysis revealed no differences between observational learning groups (grand mean ± standard error: [20.3 ± 0.7]/25). However, the global rating analysis revealed a main effect of group (F(2,36) = 4.51, p = 0.018), which describes better CLI performance in the FA group compared with the F and E groups. Observational practice that includes errors improves the global performance aspects of clinical skill learning as long as learners are given confirmation that what they are observing is errorful. These findings provide a refined perspective on the optimal organisation of skill education programmes that combine physical and observational practice activities. © 2015 John Wiley & Sons Ltd.
Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT
NASA Astrophysics Data System (ADS)
Ubaidulla, P.; Chockalingam, A.
2009-12-01
We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
Shahsavari, Shadab; Rezaie Shirmard, Leila; Amini, Mohsen; Abedin Dokoosh, Farid
2017-01-01
Formulation of a nanoparticulate Fingolimod delivery system based on biodegradable poly(3-hydroxybutyrate-co-3-hydroxyvalerate) was optimized using artificial neural networks (ANNs). The concentrations of poly(3-hydroxybutyrate-co-3-hydroxyvalerate) and PVA and the amount of Fingolimod are considered as input values, and the particle size, polydispersity index, loading capacity, and entrapment efficiency as output data in the experimental design study. An in vitro release study was carried out for the best formulation according to the statistical analysis. ANNs are employed to generate the best model to determine the relationships between the various values. In order to specify the model with the best accuracy and proficiency for the in vitro release, a multilayer perceptron with different training algorithms has been examined. Three training algorithms, including Levenberg-Marquardt (LM), gradient descent, and Bayesian regularization, were employed for training the ANN models. It is demonstrated that the predictive ability of each training algorithm is in the order of LM > gradient descent > Bayesian regularization. The optimum formulation was achieved by the LM training function with 15 hidden layers and 20 neurons. The transfer functions of the hidden layer and the output layer for this formulation were tansig and purelin, respectively. The optimization process was developed by minimizing the error between the predicted and observed values of the training algorithm (about 0.0341). Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
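As a rough illustration of fitting a small tansig/purelin network with a Levenberg-Marquardt style optimizer, the sketch below trains a single-hidden-layer perceptron on synthetic stand-in data using scipy's least-squares solver with method="lm". The three inputs, the single response, the network size and all numbers are illustrative assumptions, far smaller than the networks reported in the study.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Synthetic stand-ins for formulation data: 3 inputs (polymer, PVA, drug
# amounts) and one response (e.g. particle size); real data would replace these.
X = rng.uniform(0.0, 1.0, size=(30, 3))
y = 150 + 80 * X[:, 0] - 40 * X[:, 1] + 25 * X[:, 2] + rng.normal(0, 3, 30)

n_hidden = 5  # illustrative, much smaller than the study's networks

def unpack(theta):
    i = n_hidden * 3
    W1 = theta[:i].reshape(n_hidden, 3)
    b1 = theta[i:i + n_hidden]
    w2 = theta[i + n_hidden:i + 2 * n_hidden]
    b2 = theta[-1]
    return W1, b1, w2, b2

def forward(theta, X):
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ W1.T + b1)         # tansig-style hidden layer
    return h @ w2 + b2                  # purelin-style linear output

def residuals(theta):
    return forward(theta, X) - y

theta0 = rng.normal(scale=0.1, size=n_hidden * 3 + 2 * n_hidden + 1)
fit = least_squares(residuals, theta0, method="lm")   # Levenberg-Marquardt
print("training MSE:", np.mean(fit.fun ** 2))
```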
Yilmaz, Banu; Aras, Egemen; Nacar, Sinan; Kankal, Murat
2018-05-23
The functional life of a dam is often determined by the rate of sediment delivery to its reservoir. Therefore, an accurate estimate of the sediment load in rivers with dams is essential for designing and predicting a dam's useful lifespan. The most credible method is direct measurement of sediment input, but this can be very costly and it cannot always be implemented at all gauging stations. In this study, we tested various regression models to estimate suspended sediment load (SSL) at two gauging stations on the Çoruh River in Turkey, including artificial bee colony (ABC), the teaching-learning-based optimization algorithm (TLBO), and multivariate adaptive regression splines (MARS). These models were also compared with one another and with classical regression analyses (CRA). Streamflow values and previously collected SSL data were used as model inputs, with predicted SSL data as output. Two different training and testing dataset configurations were used to reinforce the model accuracy. For the MARS method, the root mean square error value was found to range between 35% and 39% for the two test gauging stations, which was lower than the errors for the other models. Error values were even lower (7% to 15%) using another dataset. Our results indicate that simultaneous measurements of streamflow with SSL provide the most effective parameter for obtaining accurate predictive models and that MARS is the most accurate model for predicting SSL. Copyright © 2017 Elsevier B.V. All rights reserved.
Photocurrent Measurement of PC and PV HgCdTe Detectors
Eppeldauer, George P.; Martin, Robert J.
2001-01-01
Novel preamplifiers for working standard photoconductive (PC) and photovoltaic (PV) HgCdTe detectors have been developed to maintain the spectral responsivity scale of the National Institute of Standards and Technology (NIST) in the wavelength range of 5 μm to 20 μm. The linear PC mode preamplifier does not need any compensating source to zero the effect of the detector bias current for the preamplifier output. The impedance multiplication concept with a positive feedback buffer amplifier was analyzed and utilized in a bootstrap PV transimpedance amplifier to measure photocurrent of a 200 Ω shunt resistance photodiode with a maximum signal gain of 10^8 V/A. In spite of the high performance lock-in used as a second-stage signal-amplifier, the signal-to-noise ratio had to be optimized for the output of the photocurrent preamplifiers. Noise and drift were equalized for the output of the PV mode preamplifier. The signal gain errors were calculated to determine the signal frequency range where photocurrent-to-voltage conversion can be performed with very low uncertainties. For the design of both PC and PV detector preamplifiers, the most important gain equations are described. Measurement results on signal ranges and noise performance are discussed. PMID:27500036
Photocurrent Measurement of PC and PV HgCdTe Detectors.
Eppeldauer, G P; Martin, R J
2001-01-01
Novel preamplifiers for working standard photoconductive (PC) and photovoltaic (PV) HgCdTe detectors have been developed to maintain the spectral responsivity scale of the National Institute of Standards and Technology (NIST) in the wavelength range of 5 μm to 20 μm. The linear PC mode preamplifier does not need any compensating source to zero the effect of the detector bias current for the preamplifier output. The impedance multiplication concept with a positive feedback buffer amplifier was analyzed and utilized in a bootstrap PV transimpedance amplifier to measure photocurrent of a 200 Ω shunt resistance photodiode with a maximum signal gain of 10^8 V/A. In spite of the high performance lock-in used as a second-stage signal-amplifier, the signal-to-noise ratio had to be optimized for the output of the photocurrent preamplifiers. Noise and drift were equalized for the output of the PV mode preamplifier. The signal gain errors were calculated to determine the signal frequency range where photocurrent-to-voltage conversion can be performed with very low uncertainties. For the design of both PC and PV detector preamplifiers, the most important gain equations are described. Measurement results on signal ranges and noise performance are discussed.
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Dell'Oca, A.
2017-12-01
We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
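A minimal Monte Carlo sketch of moment-based global sensitivity analysis is given below: samples are binned on each parameter, and the average deviation of the conditional mean, variance, skewness and kurtosis of the output from their unconditional values serves as a crude sensitivity index for that moment. The binning scheme, the omission of any normalization by the unconditional moment, and the Ishigami-type test function are simplifying assumptions, not the specific metrics or surrogate-based workflow used by the authors.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def moment_based_gsa(model, n_params, n_samples=20000, n_bins=20, seed=0):
    """For each parameter, measure how strongly conditioning on it shifts each
    statistical moment of the model output (a crude moment-based GSA index)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_samples, n_params))
    Y = model(X)
    stats = {"mean": np.mean, "var": np.var, "skew": skew, "kurt": kurtosis}
    uncond = {k: f(Y) for k, f in stats.items()}          # unconditional moments
    indices = {k: np.zeros(n_params) for k in stats}
    edges = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    for i in range(n_params):
        bins = np.digitize(X[:, i], edges)                # condition on parameter i
        for k, f in stats.items():
            cond = np.array([f(Y[bins == b]) for b in range(n_bins)])
            indices[k][i] = np.mean(np.abs(cond - uncond[k]))
    return indices

# Ishigami-type analytical benchmark with inputs rescaled from [0, 1] to [-pi, pi].
def ishigami(X):
    x = -np.pi + 2.0 * np.pi * X
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

print(moment_based_gsa(ishigami, 3))
```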
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
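The core identity behind adjoint-based functional error estimation can be seen on a small linear-algebra stand-in: for A u = f and an output J = g·u, the adjoint solution of Aᵀψ = g turns the residual of an approximate state into the error in J. The sketch below demonstrates this; the random system is purely illustrative, and for nonlinear flow equations the corresponding adjoint-weighted residual is an estimate rather than an exact correction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete stand-in for a flow solve A u = f with output functional J = g.u
n = 50
A = np.eye(n) * 4.0 + rng.normal(scale=0.3, size=(n, n))
f = rng.normal(size=n)
g = rng.normal(size=n)

u_exact = np.linalg.solve(A, f)
u_approx = u_exact + rng.normal(scale=1e-3, size=n)   # imperfectly converged state

# Adjoint problem for the functional of interest.
psi = np.linalg.solve(A.T, g)

# Adjoint-weighted residual: exact for a linear system, an estimate otherwise.
residual = f - A @ u_approx
dJ_estimate = psi @ residual
dJ_true = g @ u_exact - g @ u_approx

print(dJ_estimate, dJ_true)   # the two agree to machine precision here
```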
On codes with multi-level error-correction capabilities
NASA Technical Reports Server (NTRS)
Lin, Shu
1987-01-01
In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.
Statistical analysis of global horizontal solar irradiation GHI in Fez city, Morocco
NASA Astrophysics Data System (ADS)
Bounoua, Z.; Mechaqrane, A.
2018-05-01
An accurate knowledge of the solar energy reaching the ground is necessary for sizing and optimizing the performance of solar installations. This paper describes a statistical analysis of the global horizontal solar irradiation (GHI) at Fez city, Morocco. For better reliability, we first applied a set of check procedures to test the quality of the hourly GHI measurements. We then eliminated the erroneous values, which are generally due to measurement errors or the cosine effect. The statistical analysis shows that the annual mean daily value of GHI is approximately 5 kWh/m²/day. Daily and monthly mean values and other parameters are also calculated.
Systems and methods for compensating for electrical converter nonlinearities
Perisic, Milun; Ransom, Ray M.; Kajouke, Lateef A.
2013-06-18
Systems and methods are provided for delivering energy from an input interface to an output interface. An electrical system includes an input interface, an output interface, an energy conversion module coupled between the input interface and the output interface, and a control module. The control module determines a duty cycle control value for operating the energy conversion module to produce a desired voltage at the output interface. The control module determines an input power error at the input interface and adjusts the duty cycle control value in a manner that is influenced by the input power error, resulting in a compensated duty cycle control value. The control module operates switching elements of the energy conversion module to deliver energy to the output interface with a duty cycle that is influenced by the compensated duty cycle control value.
NASA Astrophysics Data System (ADS)
Zhang, Jiancheng; Zhu, Fanglai
2018-03-01
In this paper, the output consensus of a class of linear heterogeneous multi-agent systems with unmatched disturbances is considered. Firstly, based on the relative output information among neighboring agents, we propose an asymptotic sliding-mode based consensus control scheme, under which the output consensus error can converge to zero by removing the disturbances from the output channels. Secondly, in order to reach the consensus goal, we design a novel high-order unknown input observer for each agent. It can estimate not only each agent's states and disturbances, but also the disturbances' high-order derivatives, which are required in the aforementioned control scheme. The observer-based consensus control laws and the convergence analysis of the consensus error dynamics are given. Finally, a simulation example is provided to verify the validity of our methods.
A open loop guidance architecture for navigationally robust on-orbit docking
NASA Technical Reports Server (NTRS)
Chern, Hung-Sheng
1995-01-01
The development of an open-loop guidance architecture is outlined for autonomous rendezvous and docking (AR&D) missions to determine whether the Global Positioning System (GPS) can be used in place of optical sensors for relative initial position determination of the chase vehicle. Feasible command trajectories for one-, two-, and three-impulse AR&D maneuvers are determined using constrained trajectory optimization. Early AR&D command trajectory results suggest that docking accuracies are most sensitive to vertical position errors at the initial condition of the chase vehicle. Thus, a feasible command trajectory is based on maximizing the size of the locus of initial vertical positions for which a fixed sequence of impulses will translate the chase vehicle into the target while satisfying docking accuracy requirements. Documented accuracies are used to determine whether relative GPS can achieve the vertical position error requirements of the impulsive command trajectories. Preliminary development of a thruster management system for the Cargo Transfer Vehicle (CTV) based on optimal throttle settings is presented to complete the guidance architecture. Results show that a guidance architecture based on two-impulse maneuvers generated the best performance in terms of initial position error and total velocity change for the chase vehicle.
Quadratic correlation filters for optical correlators
NASA Astrophysics Data System (ADS)
Mahalanobis, Abhijit; Muise, Robert R.; Vijaya Kumar, Bhagavatula V. K.
2003-08-01
Linear correlation filters have been implemented in optical correlators and successfully used for a variety of applications. The output of an optical correlator is usually sensed using a square law device (such as a CCD array), which forces the output to be the squared magnitude of the desired correlation. It is, however, not traditional practice to factor the effect of the square-law detector into the design of the linear correlation filters. In fact, the input-output relationship of an optical correlator is more accurately modeled as a quadratic operation than a linear operation. Quadratic correlation filters (QCFs) operate directly on the image data without the need for feature extraction or segmentation. In this sense, the QCFs retain the main advantages of conventional linear correlation filters while offering significant improvements in other respects. Not only is more processing required to detect peaks in the outputs of multiple linear filters, but choosing a winner among them is an error-prone task. In contrast, all channels in a QCF work together to optimize the same performance metric and produce a combined output that leads to considerable simplification of the post-processing. In this paper, we propose a novel approach to the design of quadratic correlation filters based on the Fukunaga-Koontz transform. Although quadratic filters are known to be optimum when the data are Gaussian, it is expected that they will perform as well as or better than linear filters in general. Preliminary performance results are provided that show that quadratic correlation filters perform better than their linear counterparts.
ERIC Educational Resources Information Center
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
A Practical, Hardware Friendly MMSE Detector for MIMO-OFDM-Based Systems
NASA Astrophysics Data System (ADS)
Kim, Hun Seok; Zhu, Weijun; Bhatia, Jatin; Mohammed, Karim; Shah, Anish; Daneshrad, Babak
2008-12-01
Design and implementation of a highly optimized MIMO (multiple-input multiple-output) detector requires co-optimization of the algorithm with the underlying hardware architecture. Special attention must be paid to application requirements such as throughput, latency, and resource constraints. In this work, we focus on a highly optimized, matrix-inversion-free MMSE (minimum mean square error) MIMO detector implementation. The work has resulted in a real-time field-programmable gate array (FPGA) based implementation on a Xilinx Virtex-2 6000 using only 9003 logic slices, 66 multipliers, and 24 Block RAMs (less than 33% of the overall resources of this part). The design delivers over 420 Mbps sustained throughput with a small 2.77-microsecond latency. The designed linear MMSE MIMO detector is capable of complying with the proposed IEEE 802.11n standard.
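For reference, a plain linear MMSE detector for y = Hx + n computes x_hat = (H^H H + sigma^2 I)^(-1) H^H y. The sketch below shows this for a hypothetical 4x4 QPSK link; it uses a direct solve for clarity, whereas the implementation described above is organized to avoid explicit matrix inversion in hardware.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE estimate of the transmitted vector x from y = H x + n."""
    nt = H.shape[1]
    G = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(nt), H.conj().T)
    return G @ y

rng = np.random.default_rng(0)
nt, nr = 4, 4
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), size=nt)  # QPSK
noise_var = 0.05
y = H @ x + np.sqrt(noise_var / 2) * (rng.normal(size=nr) + 1j * rng.normal(size=nr))

x_hat = mmse_detect(H, y, noise_var)
# Hard decisions: map each soft estimate back to the nearest QPSK point.
decisions = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print(np.allclose(decisions, x))
```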
Complete Tri-Axis Magnetometer Calibration with a Gyro Auxiliary
Yang, Deng; You, Zheng; Li, Bin; Duan, Wenrui; Yuan, Binwen
2017-01-01
Magnetometers combined with inertial sensors are widely used for orientation estimation, and calibrations are necessary to achieve high accuracy. This paper presents a complete tri-axis magnetometer calibration algorithm with a gyro auxiliary. The magnetic distortions and sensor errors, including the misalignment error between the magnetometer and assembled platform, are compensated after calibration. With the gyro auxiliary, the magnetometer linear interpolation outputs are calculated, and the error parameters are evaluated under linear operations of magnetometer interpolation outputs. The simulation and experiment are performed to illustrate the efficiency of the algorithm. After calibration, the heading errors calculated by magnetometers are reduced to 0.5° (1σ). This calibration algorithm can also be applied to tri-axis accelerometers whose error model is similar to tri-axis magnetometers. PMID:28587115
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from the application of the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0
Zhang, Xi; Miao, Lingjuan; Shao, Haijun
2016-01-01
If a Kalman Filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure avoids carrier tracking being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate local carrier and code. Although local carrier frequency has a wide fluctuation, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not viewed as the local carrier frequency directly. That facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can destroy the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, field test and semi-physical simulation based on telemetered missile trajectory validate the effectiveness of methods proposed in this paper. PMID:27144570
Zhang, Xi; Miao, Lingjuan; Shao, Haijun
2016-05-02
If a Kalman Filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure avoids carrier tracking being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate local carrier and code. Although local carrier frequency has a wide fluctuation, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not viewed as the local carrier frequency directly. That facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can destroy the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, field test and semi-physical simulation based on telemetered missile trajectory validate the effectiveness of methods proposed in this paper.
NASA Technical Reports Server (NTRS)
Thakoor, Anil
1990-01-01
Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.
Full-chip level MEEF analysis using model based lithography verification
NASA Astrophysics Data System (ADS)
Kim, Juhwan; Wang, Lantian; Zhang, Daniel; Tang, Zongwu
2005-11-01
MEEF (Mask Error Enhancement Factor) has become a critical factor in CD uniformity control since the optical lithography process moved into the sub-resolution era. Many studies have quantified the impact of mask CD (Critical Dimension) errors on wafer CD errors [1-2]. However, the benefits from those studies were restricted to small pattern areas of the full-chip data due to long simulation times. As fast turnaround time can be achieved for complicated verifications on very large data by linearly scalable distributed processing technology, model-based lithography verification becomes feasible for various types of applications, such as post mask synthesis data sign-off for mask tape-out in production and lithography process development with full-chip data [3,4,5]. In this study, we introduce two useful methodologies for full-chip level verification of the mask error impact on the wafer lithography patterning process. One methodology is to check the MEEF distribution in addition to the CD distribution through the process window, which can be used for RET/OPC optimization at the R&D stage. The other is to check mask error sensitivity on potential pinch and bridge hotspots through lithography process variation, where the outputs can be passed on to mask CD metrology to add CD measurements at those hotspot locations. Two different OPC data sets were compared using the two methodologies in this study.
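MEEF is commonly taken as the derivative of the printed wafer CD with respect to the mask CD expressed at wafer scale, so values well above 1 indicate that mask errors are amplified on the wafer. A small finite-difference sketch with hypothetical CD numbers (not taken from this study) is shown below.

```python
def meef(mask_cds_nm, wafer_cds_nm, magnification=4.0):
    """Finite-difference Mask Error Enhancement Factor.

    MEEF = d(wafer CD) / d(mask CD at wafer scale); values well above 1 mean
    mask CD errors are amplified in the printed feature.
    """
    dm = (mask_cds_nm[-1] - mask_cds_nm[0]) / magnification  # wafer-scale mask delta
    dw = wafer_cds_nm[-1] - wafer_cds_nm[0]
    return dw / dm

# Hypothetical numbers: a 4x mask biased by +/-2 nm (+/-0.5 nm at wafer scale)
# around the nominal feature, with the printed CD read from simulation.
print(meef(mask_cds_nm=[258.0, 262.0], wafer_cds_nm=[63.8, 66.2]))  # -> 2.4
```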
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Zheng, Jason Xin; Nguyen, Kayla; He, Yutao
2010-01-01
Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
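A minimal software analogue of the thread decomposition idea is sketched below: each output of a decimate-by-M FIR filter is computed as its own finite convolution (one "thread" per output), and the result is checked against filtering at the full rate followed by decimation. The filter taps, rate change and data are illustrative, and no FPGA-specific scheduling or tap-allocation strategy from the paper is represented.

```python
import numpy as np

def decimating_fir_threads(x, taps, m):
    """Decimate-by-m FIR computed one output (thread) at a time.

    Each output sample is an independent finite convolution anchored at every
    m-th input sample, so only the needed outputs are ever computed.
    """
    taps = np.asarray(taps)
    n_out = (len(x) - len(taps)) // m + 1
    y = np.empty(n_out)
    for k in range(n_out):                       # one thread per output sample
        window = x[k * m : k * m + len(taps)]
        y[k] = np.dot(window, taps[::-1])        # finite convolution
    return y

rng = np.random.default_rng(3)
x = rng.normal(size=64)
taps = np.array([0.25, 0.5, 0.75, 0.5, 0.25])
m = 4

y_threads = decimating_fir_threads(x, taps, m)
# Reference: filter at the full rate, then keep every m-th sample.
full = np.convolve(x, taps, mode="valid")
print(np.allclose(y_threads, full[::m]))
```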
NASA Astrophysics Data System (ADS)
Yu, Jiang-Bo; Zhao, Yan; Wu, Yu-Qiang
2014-04-01
This article considers the global robust output regulation problem via output feedback for a class of cascaded nonlinear systems with input-to-state stable inverse dynamics. The system uncertainties depend not only on the measured output but also all the unmeasurable states. By introducing an internal model, the output regulation problem is converted into a stabilisation problem for an appropriately augmented system. The designed dynamic controller could achieve the global asymptotic tracking control for a class of time-varying reference signals for the system output while keeping all other closed-loop signals bounded. It is of interest to note that the developed control approach can be applied to the speed tracking control of the fan speed control system. The simulation results demonstrate its effectiveness.
Improvement of short-term numerical wind predictions
NASA Astrophysics Data System (ADS)
Bedard, Joel
Geophysic Model Output Statistics (GMOS) are developed to optimize the use of NWP for complex sites. GMOS differs from other MOS that are widely used by meteorological centers in the following aspects: it takes into account the surrounding geophysical parameters such as surface roughness, terrain height, etc., along with wind direction, and it can be applied directly without any training, although training will further improve the results. GMOS was applied to improve the Environment Canada GEM-LAM 2.5 km forecasts at North Cape (PEI, Canada): it improves the prediction RMSE by 25-30% for all time horizons and almost all meteorological conditions; the topographic signature of the forecast error due to insufficient grid refinement is eliminated, and the NWP combined with GMOS outperforms persistence from a 2 h horizon, instead of 4 h without GMOS. Finally, GMOS was applied at another site (Bouctouche, NB, Canada): similar improvements were observed, thus showing its general applicability. Keywords: wind energy, wind power forecast, numerical weather prediction, complex sites, model output statistics
Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules
Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh
2011-01-01
This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that the additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization. PMID:22046232
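The modification described above amounts to appending the raw input pattern to the base classifiers' output probabilities before training the combiner. A small sketch using scikit-learn on synthetic data (a stand-in for ECG beat features) is given below; the base learners, combiner and data split are arbitrary choices for illustration, not those of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for ECG beat feature vectors with three beat classes.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_base, X_meta, y_base, y_meta = train_test_split(X, y, test_size=0.5, random_state=0)
X_meta_tr, X_test, y_meta_tr, y_test = train_test_split(X_meta, y_meta,
                                                        test_size=0.4, random_state=0)

bases = [KNeighborsClassifier(5), DecisionTreeClassifier(max_depth=6, random_state=0)]
for clf in bases:
    clf.fit(X_base, y_base)

def meta_features(X, with_input):
    probs = [clf.predict_proba(X) for clf in bases]        # base classifiers' outputs
    return np.hstack(probs + ([X] if with_input else []))  # optionally append the input

for with_input in (False, True):   # plain vs. "modified" stacked generalization
    combiner = LogisticRegression(max_iter=2000)
    combiner.fit(meta_features(X_meta_tr, with_input), y_meta_tr)
    acc = combiner.score(meta_features(X_test, with_input), y_test)
    print("input appended:" if with_input else "outputs only: ", round(acc, 3))
```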
NASA Astrophysics Data System (ADS)
Liu, Dong; Miller, Ian; Hostetler, Chris; Cook, Anthony; Hair, Johnathan
2011-06-01
High spectral resolution lidars (HSRLs) have recently shown great value in aerosol measurements from aircraft and are being called for in future space-based aerosol remote sensing applications. A quasi-monolithic field-widened, off-axis Michelson interferometer has been developed as the spectral discrimination filter for an HSRL currently under development at NASA Langley Research Center (LaRC). The Michelson filter consists of a cubic beam splitter, a solid arm and an air arm. The input light is injected at 1.5° off-axis to provide two output channels: the standard Michelson output and the reflected complementary signal. Piezo packs connect the air arm mirror to the main part of the filter, allowing it to be tuned within a small range. In this paper, analyses of the throughput wavephase, locking error, AR coating, and tilt angle of the interferometer are described. The transmission ratio for monochromatic light at the transmitted wavelength is used as a figure of merit for assessing each of these parameters.
Comparisons between data assimilated HYCOM output and in situ Argo measurements in the Bay of Bengal
NASA Astrophysics Data System (ADS)
Wilson, E. A.; Riser, S.
2014-12-01
This study evaluates the performance of data assimilated Hybrid Coordinate Ocean Model (HYCOM) output for the Bay of Bengal from September 2008 through July 2013. We find that while HYCOM assimilates Argo data, the model still suffers from significant temperature and salinity biases in this region. These biases are most severe in the northern Bay of Bengal, where the model tends to be too saline near the surface and too fresh at depth. The maximum magnitude of these biases is approximately 0.6 PSS. We also find that the model's salinity biases have a distinct seasonal cycle. The most problematic periods are the months following the summer monsoon (Oct-Jan). HYCOM's near surface temperature estimates compare more favorably with Argo, but significant errors exist at deeper levels. We argue that optimal interpolation will tend to induce positive salinity biases in the northern regions of the Bay. Further, we speculate that these biases are introduced when the model relaxes to climatology and assimilates real-time data.
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
NASA Astrophysics Data System (ADS)
Zhao, Dan; Wang, Xiaoman; Cheng, Yuan; Liu, Shaogang; Wu, Yanhong; Chai, Liqin; Liu, Yang; Cheng, Qianju
2018-05-01
A piecewise-linear structure can effectively broaden the working frequency band of a piezoelectric energy harvester, and further progress in its study can help bring energy harvesting devices into practical use for powering microelectronic components. In this paper, the incremental harmonic balance (IHB) method is introduced to handle the otherwise complicated and difficult analysis of such harvesters. The nonlinear dynamic equation of a single-degree-of-freedom piecewise-linear energy harvester is obtained by mathematical modeling and solved with the IHB method, yielding the theoretical amplitude-frequency curve of the open-circuit voltage. Under 0.2 g harmonic excitation, a piecewise-linear energy harvester is experimentally tested by unidirectional frequency-increasing scanning. The results demonstrate that the theoretical and experimental amplitudes have the same trend: the width of the working band with high voltage output is 4.9 Hz in theory and 4.7 Hz in experiment, a relative error of 4.08%, and the peak open-circuit voltage is 21.53 V in theory and 18.25 V in experiment, a relative error of 15.23%. Since the theoretical values are consistent with the experimental results, the theoretical model and the incremental harmonic balance method used in this paper are suitable for solving the single-degree-of-freedom piecewise-linear piezoelectric energy harvester and can be applied to further parameter-optimized design.
Application of artificial neural networks to chemostratigraphy
NASA Astrophysics Data System (ADS)
Malmgren, Björn A.; Nordlund, Ulf
1996-08-01
Artificial neural networks, a branch of artificial intelligence, are computer systems formed by a number of simple, highly interconnected processing units that have the ability to learn a set of target vectors from a set of associated input signals. Neural networks learn by self-adjusting a set of parameters, using some pertinent algorithm to minimize the error between the desired output and the network output. We explore the potential of this approach in solving a problem involving classification of geochemical data. The data, taken from the literature, are derived from four late Quaternary zones of volcanic ash of basaltic and rhyolitic origin from the Norwegian Sea. These ash layers span the oxygen isotope zones 1, 5, 7, and 11, respectively (last 420,000 years). The data consist of nine geochemical variables (oxides) determined in each of 183 samples. We employed a three-layer back-propagation neural network to assess its efficiency in optimally differentiating samples from the four ash zones on the basis of their geochemical composition. For comparison, three statistical pattern recognition techniques, linear discriminant analysis, the k-nearest neighbor (k-NN) technique, and SIMCA (soft independent modeling of class analogy), were applied to the same data. All of these showed considerably higher error rates than the artificial neural network, indicating that the back-propagation network was indeed more powerful in correctly classifying the ash particles to the appropriate zone on the basis of their geochemical composition.
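A compact way to reproduce this kind of comparison is to cross-validate a back-propagation network against k-NN and linear discriminant analysis on a labelled geochemical table. The sketch below does so with scikit-learn, using a bundled dataset as a stand-in for the nine-oxide ash data; the model settings are illustrative assumptions rather than those of the study.

```python
from sklearn.datasets import load_iris  # stand-in for the 9-oxide ash geochemistry table
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

models = {
    "back-propagation ANN": make_pipeline(StandardScaler(),
                                          MLPClassifier(hidden_layer_sizes=(10,),
                                                        max_iter=5000, random_state=0)),
    "k-NN (k=3)": make_pipeline(StandardScaler(), KNeighborsClassifier(3)),
    "linear discriminant": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)          # 5-fold cross-validation
    print(f"{name}: error rate = {1 - scores.mean():.3f}")
```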
Improve homology search sensitivity of PacBio data by correcting frameshifts.
Du, Nan; Sun, Yanni
2016-09-01
Single-molecule, real-time sequencing (SMRT) developed by Pacific Biosciences produces longer reads than second-generation sequencing technologies such as Illumina. The long read length enables PacBio sequencing to close gaps in genome assembly, reveal structural variations, and identify gene isoforms with higher accuracy in transcriptomic sequencing. However, PacBio data has a high sequencing error rate, and most of the errors are insertion or deletion errors. During alignment-based homology search, insertion or deletion errors in genes will cause frameshifts and may lead to only marginal alignment scores and short alignments. As a result, it is hard to distinguish true alignments from random alignments, and the ambiguity will incur errors in structural and functional annotation. Existing frameshift correction tools are designed for data with much lower error rates and are not optimized for PacBio data. As an increasing number of groups are using SMRT, there is an urgent need for dedicated homology search tools for PacBio data. In this work, we introduce Frame-Pro, a profile homology search tool for PacBio reads. Our tool corrects sequencing errors and also outputs the profile alignments of the corrected sequences against characterized protein families. We applied our tool to both simulated and real PacBio data. The results showed that our method enables more sensitive homology search, especially for PacBio data sets of low sequencing coverage. In addition, we can correct more errors when compared with a popular error correction tool that does not rely on hybrid sequencing. The source code is freely available at https://sourceforge.net/projects/frame-pro/. Contact: yannisun@msu.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Short-Term Global Horizontal Irradiance Forecasting Based on Sky Imaging and Pattern Recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Feng, Cong; Cui, Mingjian
Accurate short-term forecasting is crucial for solar integration in the power grid. In this paper, a classification forecasting framework based on pattern recognition is developed for 1-hour-ahead global horizontal irradiance (GHI) forecasting. Three sets of models in the forecasting framework are trained by the data partitioned from the preprocessing analysis. The first two sets of models forecast GHI for the first four daylight hours of each day. Then the GHI values in the remaining hours are forecasted by an optimal machine learning model determined based on a weather pattern classification model in the third model set. The weather pattern is determined by a support vector machine (SVM) classifier. The developed framework is validated by the GHI and sky imaging data from the National Renewable Energy Laboratory (NREL). Results show that the developed short-term forecasting framework outperforms the persistence benchmark by 16% in terms of the normalized mean absolute error and 25% in terms of the normalized root mean square error.
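A minimal sketch of the third model set's idea, under stated assumptions: an SVM classifier assigns a weather pattern and a pattern-specific regressor then produces the 1-hour-ahead GHI value. The feature names, the random data, and the random-forest regressor are placeholders, not the NREL dataset or the paper's exact model choices.

import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
features = rng.normal(size=(500, 6))        # e.g. sky-image and clear-sky-index features (assumed)
patterns = rng.integers(0, 3, size=500)     # e.g. clear / partly cloudy / overcast labels
ghi_next = rng.uniform(0, 1000, size=500)   # 1-hour-ahead GHI targets in W/m^2 (synthetic)

svm = SVC(kernel="rbf").fit(features, patterns)            # weather-pattern classifier
per_pattern = {p: RandomForestRegressor(random_state=0)
                   .fit(features[patterns == p], ghi_next[patterns == p])
               for p in np.unique(patterns)}               # one forecaster per pattern

x_new = rng.normal(size=(1, 6))
p_hat = svm.predict(x_new)[0]
print("predicted pattern:", p_hat, "forecast GHI:", per_pattern[p_hat].predict(x_new)[0])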
Luo, Yong; Wu, Wenqi; Babu, Ravindra; Tang, Kanghua; Luo, Bing
2012-01-01
COMPASS is an indigenously developed Chinese global navigation satellite system and will share many features in common with GPS (Global Positioning System). Since ultra-tight GPS/INS (Inertial Navigation System) integration shows advantages over independent GPS receivers in many scenarios, a federated ultra-tight COMPASS/INS integration is investigated in this paper, in particular by proposing a simplified prefilter model. Compared with a traditional prefilter model, the state space of this simplified system contains only the carrier phase, carrier frequency and carrier frequency rate tracking errors. A two-quadrant arctangent discriminator output is used as the measurement. Since the code tracking error related parameters of traditional prefilter models are excluded from this state space, code/carrier divergence would corrupt the carrier tracking process; therefore, an adaptive Kalman filter algorithm that tunes the process noise covariance matrix based on the state correction sequence was incorporated to compensate for the divergence. The federated ultra-tight COMPASS/INS integration was implemented with a hardware sampling system for the COMPASS intermediate frequency (IF) signal and the INS accelerometer and gyroscope signals. Field and simulation test results showed almost identical tracking and navigation performance for both the traditional prefilter model and the proposed system; however, the latter greatly decreased the computational load. PMID:23012564
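The sketch below illustrates the general idea of adapting the process-noise covariance from the state-correction sequence in a small phase/frequency/frequency-rate filter; it is a hedged toy example, not the authors' federated COMPASS/INS implementation, and the discriminator signal, window length and noise levels are invented.

import numpy as np

dt = 1e-3
F = np.array([[1.0, dt, 0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])            # phase / frequency / frequency-rate dynamics
H = np.array([[1.0, 0.0, 0.0]])            # scalar phase-discriminator measurement
R = np.array([[1e-2]])

def kf_step(x, P, Q, z):
    """One predict/update cycle; returns the new state, covariance and correction."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    dx = (K @ (np.atleast_1d(z) - H @ x_pred)).ravel()
    return x_pred + dx, (np.eye(3) - K @ H) @ P_pred, dx

rng = np.random.default_rng(0)
x, P, Q = np.zeros(3), np.eye(3), np.eye(3) * 1e-6
corrections = []
for k in range(2000):
    z = 0.2 * np.sin(2 * np.pi * k * dt) + 0.1 * rng.normal()   # toy discriminator output
    x, P, dx = kf_step(x, P, Q, z)
    corrections.append(np.outer(dx, dx))
    if len(corrections) >= 100:            # re-scale Q from the recent correction sequence
        Q = np.mean(corrections[-100:], axis=0) + np.eye(3) * 1e-9
print("final phase/frequency/rate estimate:", x)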
Electroencephalography epilepsy classifications using hybrid cuckoo search and neural network
NASA Astrophysics Data System (ADS)
Pratiwi, A. B.; Damayanti, A.; Miswanto
2017-07-01
Epilepsy is a condition that affects the brain and causes repeated seizures. These seizures are episodes that can vary from nearly undetectable events to long periods of vigorous shaking or brain contractions. Epilepsy can often be confirmed with electroencephalography (EEG). Neural networks have been used in biomedical signal analysis and have successfully classified biomedical signals such as the EEG signal. In this paper, a hybrid of cuckoo search and a neural network is used to recognize EEG signals for epilepsy classification. The weights of a multilayer perceptron are optimized by the cuckoo search algorithm based on its error. The aim of this method is to make the network reach a local or global optimum faster so that the classification process becomes more accurate. Based on a comparison with the traditional multilayer perceptron, the hybrid of cuckoo search and multilayer perceptron provides better performance in terms of error convergence and accuracy. The proposed method gives an MSE of 0.001 and an accuracy of 90.0%.
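To make the weight-optimization step concrete, here is a rough sketch of cuckoo search (Levy-flight moves plus abandonment of poor nests) minimizing the mean squared error of a tiny one-hidden-layer perceptron; the toy data, the 4-5-1 network size and all search parameters are assumptions for illustration only, not the paper's EEG experiment.

import numpy as np
from math import gamma

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X.sum(axis=1) > 0).astype(float)            # toy stand-in for EEG features and labels
dim = (4 * 5 + 5) + (5 * 1 + 1)                  # weights and biases of a 4-5-1 perceptron

def mse(w):
    W1, b1 = w[:20].reshape(4, 5), w[20:25]
    W2, b2 = w[25:30].reshape(5, 1), w[30]
    hidden = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-((hidden @ W2).ravel() + b2)))
    return np.mean((out - y) ** 2)

def levy(shape, beta=1.5):                       # Mantegna's algorithm for Levy steps
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, shape) / np.abs(rng.normal(0, 1, shape)) ** (1 / beta)

nests = rng.normal(size=(15, dim))
fit = np.array([mse(n) for n in nests])
for _ in range(200):
    best = nests[fit.argmin()]
    new = nests + 0.01 * levy(nests.shape) * (nests - best)     # Levy-flight moves toward the best nest
    new_fit = np.array([mse(n) for n in new])
    better = new_fit < fit
    nests[better], fit[better] = new[better], new_fit[better]
    worst = rng.random(15) < 0.25                               # abandon a fraction of nests
    nests[worst] = rng.normal(size=(int(worst.sum()), dim))
    fit[worst] = np.array([mse(n) for n in nests[worst]])
print("best training MSE found:", fit.min())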
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, which considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct the parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of the target variables. Global sensitivity and uncertainty analysis indicates that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
Guziolowski, Carito; Videla, Santiago; Eduati, Federica; Thiele, Sven; Cokelaer, Thomas; Siegel, Anne; Saez-Rodriguez, Julio
2013-09-15
Logic modeling is a useful tool to study signal transduction across multiple pathways. Logic models can be generated by training a network containing the prior knowledge to phospho-proteomics data. The training can be performed using stochastic optimization procedures, but these are unable to guarantee a global optimum or to report the complete family of feasible models. This, however, is essential to provide precise insight into the mechanisms underlying signal transduction and to generate reliable predictions. We propose the use of Answer Set Programming to explore exhaustively the space of feasible logic models. Toward this end, we have developed caspo, an open-source Python package that provides a powerful platform to learn and characterize logic models by leveraging the rich modeling language and solving technologies of Answer Set Programming. We illustrate the usefulness of caspo by revisiting a model of pro-growth and inflammatory pathways in liver cells. We show that, if experimental error is taken into account, there are thousands (11,700) of models compatible with the data. Despite the large number, we can extract structural features from the models, such as links that are always (or never) present or modules that appear in a mutually exclusive fashion. To further characterize this family of models, we investigate the input-output behavior of the models. We find 91 behaviors across the 11,700 models and we suggest new experiments to discriminate among them. Our results underscore the importance of characterizing in a global and exhaustive manner the family of feasible models, with important implications for experimental design. caspo is freely available for download (license GPLv3) and as a web service at http://caspo.genouest.org/. Supplementary materials are available at Bioinformatics online. Contact: santiago.videla@irisa.fr
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
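A quick Monte Carlo cross-check of the single-branch case (an illustration, not the paper's closed-form analysis): on-off keyed IM/DD bits pass through an irradiance drawn from the negative exponential distribution with additive Gaussian receiver noise, and the empirical bit error rate is counted. The SNR value, the detection threshold and the assumption of perfect channel knowledge are all choices made for the example.

import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000
avg_snr_db = 15.0
amp = np.sqrt(10 ** (avg_snr_db / 10))                 # signal amplitude scale

bits = rng.integers(0, 2, n_bits)
irradiance = rng.exponential(scale=1.0, size=n_bits)   # negative exponential fading, E[I] = 1
noise = rng.normal(0.0, 1.0, n_bits)
received = bits * irradiance * amp + noise
decided = (received > 0.5 * irradiance * amp).astype(int)   # threshold assumes known irradiance
print("simulated average BER:", np.mean(decided != bits))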
Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory
NASA Astrophysics Data System (ADS)
Yan, Daqin; Wang, Fuzhong; Wang, Shuo
2017-12-01
Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in adverse channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, which aims to reduce the bit error rate of 2DPSK signals under coherent demodulation. According to SR theory, a nonlinear receiver model is established to receive 2DPSK signals under small signal-to-noise ratio (SNR) conditions (between -15 dB and 5 dB) and is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the nonlinear system model based on SR declines significantly compared to the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more easily detected and the accuracy can be greatly improved.
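A minimal sketch of the underlying stochastic-resonance idea, assuming the classic bistable model dx/dt = a*x - b*x^3 + u(t) rather than the paper's full 2DPSK receiver: a weak 5 Hz component buried in noise is passed through the nonlinearity and the spectral peak at the signal frequency is compared before and after. All parameter values are invented for the example.

import numpy as np

rng = np.random.default_rng(0)
a, b, D = 1.0, 1.0, 0.3
dt, n = 1e-3, 200_000
t = np.arange(n) * dt
weak_signal = 0.1 * np.cos(2 * np.pi * 5.0 * t)     # weak periodic component
noise = np.sqrt(2 * D / dt) * rng.normal(size=n)
u = weak_signal + noise                             # noisy input to the nonlinear receiver

x = np.zeros(n)
for k in range(n - 1):                              # Euler integration of the bistable SDE
    x[k + 1] = x[k] + dt * (a * x[k] - b * x[k] ** 3 + u[k])

f = np.fft.rfftfreq(n, dt)
k5 = np.argmin(np.abs(f - 5.0))
gain = np.abs(np.fft.rfft(x))[k5] / np.abs(np.fft.rfft(u))[k5]
print("spectral amplitude ratio (output/input) at 5 Hz:", round(gain, 3))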
Design of a microbial contamination detector and analysis of error sources in its optical path.
Zhang, Chao; Yu, Xiang; Liu, Xingju; Zhang, Lei
2014-05-01
Microbial contamination is a growing concern in food safety today. To effectively control the types and degree of microbial contamination during food production, this paper introduces a design for a microbial contamination detector that can be used for quick in-situ examination. The designed detector can identify the category of microbial contamination by locating its characteristic absorption peak and can then calculate the concentration of the microbial contamination by fitting the absorbance vs. concentration lines of standard samples with gradient concentrations. Based on a traditional scanning-grating detection system, this design improves the light-splitting unit to expand the scanning range and enhance the accuracy of the output wavelength. The motor rotation angle φ is designed to have a linear relationship with the output wavelength λ, which simplifies the conversion of output spectral curves into wavelength vs. light intensity curves. In this study, we also derive the relationship between the device's major sources of error and the cumulative error of the output wavelengths, and suggest a simple correction for these errors. The proposed design was applied to test pigments and volatile basic nitrogen (VBN), which evaluate the microbial contamination degree of meats, and the deviations between the measured values and the pre-set values were only in a low range of 1.15%-1.27%.
Air-braked cycle ergometers: validity of the correction factor for barometric pressure.
Finn, J P; Maxwell, B F; Withers, R T
2000-10-01
Barometric pressure exerts by far the greatest influence of the three environmental factors (barometric pressure, temperature and humidity) on power outputs from air-braked ergometers. The barometric pressure correction factor for power outputs from air-braked ergometers is in widespread use but apparently has never been empirically validated. Our experiment validated this correction factor by calibrating two air-braked cycle ergometers in a hypobaric chamber using a dynamic calibration rig. The results showed that if the power output correction for changes in air resistance at barometric pressures corresponding to altitudes of 38, 600, 1,200 and 1,800 m above mean sea level was applied, then the coefficients of variation were 0.8-1.9% over the range of 160-1,597 W. The overall mean error was 3.0%, but this included up to 0.73% for the propagated error associated with errors in the measurement of: (a) temperature, (b) relative humidity, (c) barometric pressure, and (d) force, distance and angular velocity by the dynamic calibration rig. The overall mean error therefore approximated the +/- 2.0% of true load specified by the Laboratory Standards Assistance Scheme of the Australian Sports Commission. The validity of the correction factor for barometric pressure on power output was therefore demonstrated over the altitude range of 38-1,800 m.
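As a back-of-the-envelope illustration of the correction being validated, the snippet below rescales a displayed reading to reference conditions under the simplifying assumptions that absorbed power scales linearly with air density and that humidity (a minor contributor per the abstract) can be ignored; the reference values and the example pressure are assumptions, not the paper's calibration data.

def density_corrected_power(displayed_watts, baro_hpa, temp_c,
                            ref_baro_hpa=1013.25, ref_temp_c=20.0):
    """Estimate true power from a reading taken away from reference air density."""
    rho = baro_hpa / (temp_c + 273.15)              # proportional to dry-air density
    rho_ref = ref_baro_hpa / (ref_temp_c + 273.15)
    return displayed_watts * rho / rho_ref

# Example: a 400 W reading at roughly 1,800 m altitude (about 815 hPa) and 20 deg C
print(round(density_corrected_power(400.0, 815.0, 20.0), 1))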
Salgado, Iván; Mera-Hernández, Manuel; Chairez, Isaac
2017-11-01
This study addresses the problem of designing an output-based controller to stabilize multi-input multi-output (MIMO) systems in the presence of parametric disturbances as well as uncertainties in the state model and output noise measurements. The controller design includes a linear state transformation which separates uncertainties matched to the control input and the unmatched ones. A differential neural network (DNN) observer produces a nonlinear approximation of the matched perturbation and the unknown states simultaneously in the transformed coordinates. This study proposes the use of the Attractive Ellipsoid Method (AEM) to optimize the gains of the controller and the gain observer in the DNN structure. As a consequence, the obtained control input minimizes the convergence zone for the estimation error. Moreover, the control design uses the estimated disturbance provided by the DNN to obtain a better performance in the stabilization task in comparison with a quasi-minimal output feedback controller based on a Luenberger observer and a sliding mode controller. Numerical results pointed out the advantages obtained by the nonlinear control based on the DNN observer. The first example deals with the stabilization of an academic linear MIMO perturbed system and the second example stabilizes the trajectories of a DC-motor into a predefined operation point. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Dimensional synthesis of a 3-DOF parallel manipulator with full circle rotation
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Wu, Nan; Zhong, Xueyong; Zhang, Biao
2015-07-01
Parallel robots are widely used in the academic and industrial fields. In spite of the numerous achievements in the design and dimensional synthesis of low-mobility parallel robots, few research efforts are directed towards asymmetric 3-DOF parallel robots whose end-effector can realize 2 translational and 1 rotational (2T1R) motion. In order to develop a manipulator with the capability of full circle rotation to enlarge the workspace, a new 2T1R parallel mechanism is proposed. The modeling approach and kinematic analysis of this proposed mechanism are investigated. Using the method of vector analysis, the inverse kinematic equations are established. This is followed by a rigorous proof that this mechanism attains an annular workspace through its circular rotation and two-dimensional translations. Taking the first-order perturbation of the kinematic equations, the error Jacobian matrix, which represents the mapping relationship between the error sources of the geometric parameters and the end-effector position errors, is derived. With consideration of the constraint conditions of pressure angles and feasible workspace, the dimensional synthesis is conducted with the goal of minimizing a global comprehensive performance index. The dimension parameters that give the mechanism optimal error mapping and kinematic performance are obtained through the optimization algorithm. All these research achievements lay the foundation for prototype building of this kind of parallel robot.
The CARIBU EBIS control and synchronization system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickerson, Clayton, E-mail: cdickerson@anl.gov; Peters, Christopher
2015-01-09
The Californium Rare Isotope Breeder Upgrade (CARIBU) Electron Beam Ion Source (EBIS) charge breeder has been built and tested. The bases of the CARIBU EBIS electrical system are four voltage platforms on which both DC and pulsed high voltage outputs are controlled. The high voltage output pulses are created with either a combination of a function generator and a high voltage amplifier, or two high voltage DC power supplies and a high voltage solid state switch. Proper synchronization of the pulsed voltages, fundamental to optimizing the charge breeding performance, is achieved with triggering from a digital delay pulse generator. The control system is based on National Instruments realtime controllers and LabVIEW software implementing Functional Global Variables (FGV) to store and access instrument parameters. Fiber optic converters enable network communication and triggering across the platforms.
Online 3D Ear Recognition by Combining Global and Local Features.
Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David
2016-01-01
The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.
NASA Astrophysics Data System (ADS)
Broll, J. M.; Fuselier, S. A.; Trattner, K. J.; Steven, P. M.; Burch, J. L.; Giles, B. L.
2017-12-01
Magnetic reconnection at Earth's dayside magnetopause is an essential process in magnetospheric physics. Under southward IMF conditions, reconnection occurs along a thin ribbon across the dayside magnetopause. The location of this ribbon has been studied extensively in terms of global optimization of quantities like reconnecting field energy or magnetic shear, but with expected errors of 1-2 Earth radii these global models give limited context for cases where an observation is near the reconnection line. Building on previous results, which established the cutoff contour method for locating reconnection using in-situ velocity measurements, we examine the effects of MHD-scale waves on reconnection exhaust distributions. We use a test-particle exhaust distribution propagated through global magnetohydrodynamic model fields and compare it with Magnetospheric Multiscale observations of reconnection exhaust.
Xiao, Chuncai; Hao, Kuangrong; Ding, Yongsheng
2014-12-30
This paper creates a bi-directional prediction model to predict the performance of carbon fiber and the productive parameters based on a support vector machine (SVM) and improved particle swarm optimization (IPSO) algorithm (SVM-IPSO). In the SVM, it is crucial to select the parameters that have an important impact on the performance of prediction. The IPSO is proposed to optimize them, and then the SVM-IPSO model is applied to the bi-directional prediction of carbon fiber production. The predictive accuracy of SVM is mainly dependent on its parameters, and IPSO is thus exploited to seek the optimal parameters for SVM in order to improve its prediction capability. Inspired by a cell communication mechanism, we propose IPSO by incorporating information of the global best solution into the search strategy to improve exploitation, and we employ IPSO to establish the bi-directional prediction model: in the direction of the forward prediction, we consider productive parameters as input and property indexes as output; in the direction of the backward prediction, we consider property indexes as input and productive parameters as output, and in this case, the model becomes a scheme design for novel style carbon fibers. The results from a set of the experimental data show that the proposed model can outperform the radial basis function neural network (RNN), the basic particle swarm optimization (PSO) method and the hybrid approach of genetic algorithm and improved particle swarm optimization (GA-IPSO) method in most of the experiments. In other words, simulation results demonstrate the effectiveness and advantages of the SVM-IPSO model in dealing with the problem of forecasting.
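A simplified stand-in for the tuning loop (basic PSO rather than the paper's IPSO variant, and a generic SVR on synthetic data rather than the carbon-fiber dataset): particles search the (C, gamma) hyperparameters on a log scale to minimize cross-validated error for one prediction direction.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))                                  # stand-in productive parameters
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=120)        # stand-in property index

def cost(p):
    C, gamma = 10 ** p[0], 10 ** p[1]                          # log-scale search space
    scores = cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3,
                             scoring="neg_mean_squared_error")
    return -scores.mean()

n_particles, iters = 12, 25
pos = rng.uniform(-2, 2, size=(n_particles, 2))
vel = np.zeros((n_particles, 2))
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -2, 2)
    f = np.array([cost(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("best log10(C), log10(gamma):", gbest, "cross-validated MSE:", pbest_f.min())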
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2013-07-01
Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement etc. In addition, multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance for each of the controller structure to handle the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mazur, Krzysztof; Wrona, Stanislaw; Pawelczyk, Marek
2018-01-01
The paper presents the idea and a discussion of the implementation of multichannel global active noise control systems. An active casing is used as the test plant. It has been developed by the authors to reduce device noise directly at the source by controlling the vibration of its casing. To provide a global acoustic effect in the whole environment where the device operates, a number of secondary sources and sensors are required for each casing wall, making the whole active control structure complicated, i.e. with a large number of interacting channels. The paper discloses all details concerning the hardware setup and efficient implementation of control algorithms for the multichannel case. A new formulation is presented to introduce the distributed version of the Switched-error Filtered-reference Least Mean Squares (FXLMS) algorithm together with an adaptation rate enhancement. The convergence rate of the proposed algorithm is compared with the original Multiple-error FXLMS. A number of hints that follow from many years of the authors' experience in microprocessor control system design and signal processing algorithm optimization are presented. They can be used for various active control and signal processing applications, both for academic research and commercialization.
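For context, here is a single-channel filtered-reference LMS (FXLMS) sketch; the paper's contribution, the distributed switched-error multichannel version with adaptation-rate enhancement, is not reproduced here, and the path models and signals below are synthetic assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                          # reference signal from the noise source
primary = np.array([0.0, 0.9, 0.5, 0.2])        # primary path P(z), source -> error microphone
secondary = np.array([0.0, 0.7, 0.3])           # secondary path S(z), actuator -> error microphone
s_hat = secondary.copy()                        # assume a perfect secondary-path model
d = np.convolve(x, primary)[:n]                 # disturbance at the error microphone
x_filt = np.convolve(x, s_hat)[:n]              # filtered reference

L, mu = 16, 1e-3
w = np.zeros(L)
x_buf, fx_buf = np.zeros(L), np.zeros(L)
y_buf = np.zeros(len(secondary))
e_hist = np.zeros(n)
for k in range(n):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[k]
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = x_filt[k]
    y = w @ x_buf                               # control (anti-noise) signal
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d[k] - secondary @ y_buf                # residual at the error microphone
    w += mu * e * fx_buf                        # FXLMS weight update
    e_hist[k] = e
print("mean squared error, first vs last 1000 samples:",
      np.mean(e_hist[:1000] ** 2), np.mean(e_hist[-1000:] ** 2))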
JPL-ANTOPT antenna structure optimization program
NASA Technical Reports Server (NTRS)
Strain, D. M.
1994-01-01
New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.
Quantifying global fossil-fuel CO2 emissions: from OCO-2 to optimal observing designs
NASA Astrophysics Data System (ADS)
Ye, X.; Lauvaux, T.; Kort, E. A.; Oda, T.; Feng, S.; Lin, J. C.; Yang, E. G.; Wu, D.; Kuze, A.; Suto, H.; Eldering, A.
2017-12-01
Cities house more than half of the world's population and are responsible for more than 70% of the world's anthropogenic CO2 emissions. Therefore, quantification of emissions from major cities, which comprise fewer than a hundred intense emitting spots across the globe, should allow us to monitor changes in global fossil-fuel CO2 emissions in an independent, objective way. Satellite platforms provide favorable temporal and spatial coverage to collect urban CO2 data to quantify the anthropogenic contributions to the global carbon budget. We present here the optimal observation design for NASA's OCO-2 and the Japanese GOSAT missions, based on real-data (i.e. OCO-2) experiments and Observing System Simulation Experiments (OSSEs) to address different error components in the urban CO2 budget calculation. We identify the major sources of emission uncertainties for various types of cities with different ecosystems and geographical features, such as urban plumes over flat terrain, accumulated enhancements within basins, and complex weather regimes in coastal areas. Atmospheric transport errors were characterized under various meteorological conditions using the Weather Research and Forecasting (WRF) model at 1-km spatial resolution, coupled to the Open-source Data Inventory for Anthropogenic CO2 (ODIAC) emissions. We propose and discuss optimized urban sampling strategies to address difficulties arising from the seasonality in cloud cover and emissions and the vegetation density in and around cities, and to address the daytime sampling bias using prescribed diurnal cycles. These factors are combined in pseudo-data experiments in which we evaluate the relative impact of uncertainties on inverse estimates of CO2 emissions for cities across latitudinal and climatological zones. We propose several sampling strategies to minimize the uncertainties in target mode for tracking urban fossil-fuel CO2 emissions over the globe for future satellite missions, such as OCO-3 and future versions of GOSAT.
Beyond Worst-Case Analysis in Privacy and Clustering: Exploiting Explicit and Implicit Assumptions
2013-08-01
Dwork et al. [63]. Given a query function f, the curator first estimates the global sensitivity of f, denoted GS(f) = max_{D,D'} f(D) - f(D'), then outputs f... Ostrovsky et al. [121]. Ostrovsky et al. study instances in which the ratio between the cost of the optimal (k-1)-means solution and the cost of the... k-median objective. We also build on the work of Balcan et al. [25] that investigates the connection between point-wise approximations of the target
Research on a new wave energy absorption device
NASA Astrophysics Data System (ADS)
Lu, Zhongyue; Shang, Jianzhong; Luo, Zirong; Sun, Chongfei; Zhu, Yiming
2018-01-01
To reduce the impact of global warming and of the energy crisis caused by polluting energy combustion, research on renewable and clean energy is becoming more and more important. This paper presents the design of a new wave energy absorption device and introduces its mechanical structure. The flow tube model is analyzed and the formulation of the proposed method is presented. To verify the principle of the wave absorbing device, an experiment was carried out in a laboratory environment, and the results can be applied to optimize the structural design for output power.
Dynamic Sensor Tasking for Space Situational Awareness via Reinforcement Learning
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
2016-09-01
This paper studies the Sensor Management (SM) problem for optical Space Object (SO) tracking. The tasking problem is formulated as a Markov Decision Process (MDP) and solved using Reinforcement Learning (RL). The RL problem is solved using the actor-critic policy gradient approach. The actor provides a policy which is random over actions and given by a parametric probability density function (pdf). The critic evaluates the policy by calculating the estimated total reward or the value function for the problem. The parameters of the policy action pdf are optimized using gradients with respect to the reward function. Both the critic and the actor are modeled using deep neural networks (multi-layer neural networks). The policy neural network takes the current state as input and outputs probabilities for each possible action. This policy is random, and can be evaluated by sampling random actions using the probabilities determined by the policy neural network's outputs. The critic approximates the total reward using a neural network. The estimated total reward is used to approximate the gradient of the policy network with respect to the network parameters. This approach is used to find the non-myopic optimal policy for tasking optical sensors to estimate SO orbits. The reward function is based on reducing the uncertainty for the overall catalog to below a user specified uncertainty threshold. This work uses a 30 km total position error for the uncertainty threshold. This work provides the RL method with a negative reward as long as any SO has a total position error above the uncertainty threshold. This penalizes policies that take longer to achieve the desired accuracy. A positive reward is provided when all SOs are below the catalog uncertainty threshold. An optimal policy is sought that takes actions to achieve the desired catalog uncertainty in minimum time. This work trains the policy in simulation by letting it task a single sensor to "learn" from its performance. The proposed approach for the SM problem is tested in simulation and good performance is found using the actor-critic policy gradient method.
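The toy actor-critic sketch below conveys the policy-gradient mechanics on a drastically simplified version of the tasking problem (a linear softmax policy over three objects and a linear critic instead of the paper's deep networks); the dynamics, rewards, and learning rates are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_obj, thresh = 3, 1.0
theta = np.zeros(n_obj)                         # actor: preference weight per object
v_w = np.zeros(n_obj)                           # critic: linear value-function weights
alpha_a, alpha_c, gamma_d = 0.01, 0.01, 0.98

def policy(s):
    prefs = theta * s                           # larger uncertainty -> larger preference
    p = np.exp(prefs - prefs.max())
    return p / p.sum()

for episode in range(300):
    s = rng.uniform(2.0, 5.0, n_obj)            # per-object position uncertainties
    for t in range(200):
        p = policy(s)
        a = rng.choice(n_obj, p=p)              # task the sensor on object a
        s_next = s * 1.02                       # untasked uncertainties grow
        s_next[a] = s[a] * 0.5                  # tasked object improves
        done = bool(np.all(s_next < thresh))
        r = 0.0 if done else -1.0               # penalize every step above the threshold
        td = r + (0.0 if done else gamma_d * (v_w @ s_next)) - v_w @ s
        v_w += alpha_c * td * s                 # critic update
        grad_log_pi = s * ((np.arange(n_obj) == a) - p)
        theta += alpha_a * td * grad_log_pi     # actor (policy-gradient) update
        s = s_next
        if done:
            break
print("learned tasking preferences:", theta)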
Forecasting Electric Power Generation of Photovoltaic Power System for Energy Network
NASA Astrophysics Data System (ADS)
Kudo, Mitsuru; Takeuchi, Akira; Nozaki, Yousuke; Endo, Hisahito; Sumita, Jiro
Recently, there has been an increase in concern about the global environment. Interest is growing in developing an energy network in which new energy systems such as photovoltaics and fuel cells generate power locally and electric power and heat are controlled through a communications network. We developed a power generation forecast method for photovoltaic power systems in an energy network. The method makes use of weather information and regression analysis. We forecast the power output of the photovoltaic power system installed at Expo 2005, Aichi, Japan. Comparing the predicted values with measurements, the average prediction error per day was about 26% of the measured power.
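A small illustration of the regression step (the feature names and numbers are assumptions, not the Expo 2005 dataset): historical daily output is regressed on forecast weather variables and the fitted model predicts the next day's generation.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# columns: forecast clear-sky/irradiance index, temperature (deg C), humidity (%)
weather = rng.uniform(size=(60, 3)) * np.array([1.0, 35.0, 100.0])
output_kwh = 120 * weather[:, 0] - 0.3 * weather[:, 1] + rng.normal(0, 5, 60)

model = LinearRegression().fit(weather, output_kwh)
tomorrow = np.array([[0.7, 28.0, 60.0]])        # tomorrow's forecast weather (assumed)
print(f"forecast generation: {model.predict(tomorrow)[0]:.1f} kWh")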
A new smooth robust control design for uncertain nonlinear systems with non-vanishing disturbances
NASA Astrophysics Data System (ADS)
Xian, Bin; Zhang, Yao
2016-06-01
In this paper, we consider the control problem for a general class of nonlinear systems subject to uncertain dynamics and non-vanishing disturbances. A smooth nonlinear control algorithm is presented to tackle these uncertainties and disturbances. The proposed control design employs the integral of a nonlinear sigmoid function to compensate for the uncertain dynamics, and achieves uniform semi-global practical asymptotic tracking of the system outputs. A novel Lyapunov-based stability analysis is employed to prove the convergence of the tracking errors and the stability of the closed-loop system. Numerical simulation results on a two-link robot manipulator are presented to illustrate the performance of the proposed control algorithm in comparison with the layer-boundary sliding mode controller and the robust integral of the sign of the error control design. Furthermore, real-time experiment results for the attitude control of a quadrotor helicopter are also included to confirm the effectiveness of the proposed algorithm.
Aono, Masashi; Gunji, Yukio-Pegio
2003-10-01
The emergence derived from errors is of key importance for both novel computing and novel usage of the computer. In this paper, we propose an implementable experimental plan for biological computing so as to elicit the emergent property of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts as the slime mold computer. Modifying the elementary cellular automaton so that it entails the global synchronization problem of parallel computing provides an NP-complete problem to be solved by the slime mold computer. The possibility of solving the problem by giving neither all possible results nor an explicit prescription for solution-seeking is discussed. In slime mold computing, the distributivity in the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computations. The computing system based on exhaustive absence of the super-system may produce something more than filling the vacancy.
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
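The variance-based step can be sketched as follows with a stand-in error model (the paper's actual periodic-error expressions are not reproduced, and the input ranges are invented); first-order Sobol' indices are estimated with a Saltelli-style pick-and-freeze Monte Carlo scheme.

import numpy as np

rng = np.random.default_rng(0)

def toy_first_order_error(p):
    # Placeholder model: columns are beam-splitter rotation, polarizer rotation,
    # frequency non-orthogonality and ellipticity (radians), chosen arbitrarily.
    return np.sin(p[:, 2]) + 0.3 * p[:, 0] * p[:, 1] + 0.1 * p[:, 3] ** 2

n, d = 20_000, 4
A = rng.uniform(0.0, 0.05, size=(n, d))
B = rng.uniform(0.0, 0.05, size=(n, d))
yA, yB = toy_first_order_error(A), toy_first_order_error(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                              # freeze all inputs except the i-th
    yABi = toy_first_order_error(ABi)
    Si = np.mean(yB * (yABi - yA)) / var_y           # Saltelli first-order estimator
    print(f"first-order Sobol' index S_{i}: {Si:.3f}")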
Quality assessment for VMAT prostate radiotherapy planning based on data envelopment analysis
NASA Astrophysics Data System (ADS)
Lin, Kuan-Min; Simpson, John; Sasso, Giuseppe; Raith, Andrea; Ehrgott, Matthias
2013-08-01
The majority of commercial radiotherapy treatment planning systems requires planners to iteratively adjust the plan parameters in order to find a satisfactory plan. This iterative trial-and-error nature of radiotherapy treatment planning results in an inefficient planning process and in order to reduce such inefficiency, plans can be accepted without achieving the best attainable quality. We propose a quality assessment method based on data envelopment analysis (DEA) to address this inefficiency. This method compares a plan of interest to a set of past delivered plans and searches for evidence of potential further improvement. With the assistance of DEA, planners will be able to make informed decisions on whether further planning is required and ensure that a plan is only accepted when the plan quality is close to the best attainable one. We apply the DEA method to 37 prostate plans using two assessment parameters: rectal generalized equivalent uniform dose (gEUD) as the input and D95 (the minimum dose that is received by 95% volume of a structure) of the planning target volume (PTV) as the output. The percentage volume of rectum overlapping PTV is used to account for anatomical variations between patients and is included in the model as a non-discretionary output variable. Five plans that are considered of lesser quality by DEA are re-optimized with the goal to further improve rectal sparing. After re-optimization, all five plans improve in rectal gEUD without clinically considerable deterioration of the PTV D95 value. For the five re-optimized plans, the rectal gEUD is reduced by an average of 1.84 Gray (Gy) with only an average reduction of 0.07 Gy in PTV D95. The results demonstrate that DEA can correctly identify plans with potential improvements in terms of the chosen input and outputs.
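A hedged sketch of the envelopment-form DEA score for a single plan (input-oriented CCR, with rectal gEUD as the input and PTV D95 as the output); the numbers are invented and the paper's non-discretionary overlap variable is omitted for brevity.

import numpy as np
from scipy.optimize import linprog

geud = np.array([62.0, 58.5, 60.2, 65.1, 57.8])    # inputs (Gy): rectal gEUD, lower is better
d95 = np.array([75.9, 76.1, 75.7, 76.0, 75.8])     # outputs (Gy): PTV D95, higher is better

def dea_efficiency(k):
    n = len(geud)
    c = np.r_[1.0, np.zeros(n)]                    # minimize theta; variables are [theta, lambdas]
    A_ub = np.vstack([np.r_[-geud[k], geud],       # sum(lambda * x) <= theta * x_k
                      np.r_[0.0, -d95]])           # sum(lambda * y) >= y_k
    b_ub = np.array([0.0, -d95[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

for k in range(len(geud)):
    print(f"plan {k}: DEA efficiency score {dea_efficiency(k):.3f}")
# A score below 1 suggests the rectal dose could be proportionally reduced
# while keeping target coverage, i.e. further planning may be worthwhile.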
Minimal complexity control law synthesis
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.
1989-01-01
A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.
NASA Astrophysics Data System (ADS)
Allen, Douglas R.; Hoppel, Karl W.; Kuhl, David D.
2018-03-01
Extraction of wind and temperature information from stratospheric ozone assimilation is examined within the context of the Navy Global Environmental Model (NAVGEM) hybrid 4-D variational assimilation (4D-Var) data assimilation (DA) system. Ozone can improve the wind and temperature through two different DA mechanisms: (1) through the flow-of-the-day ensemble background error covariance that is blended together with the static background error covariance and (2) via the ozone continuity equation in the tangent linear model and adjoint used for minimizing the cost function. All experiments assimilate actual conventional data in order to maintain a similar realistic troposphere. In the stratosphere, the experiments assimilate simulated ozone and/or radiance observations in various combinations. The simulated observations are constructed for a case study based on a 16-day cycling truth experiment (TE), which is an analysis with no stratospheric observations. The impact of ozone on the analysis is evaluated by comparing the experiments to the TE for the last 6 days, allowing for a 10-day spin-up. Ozone assimilation benefits the wind and temperature when data are of sufficient quality and frequency. For example, assimilation of perfect (no applied error) global hourly ozone data constrains the stratospheric wind and temperature to within ~2 m s-1 and ~1 K. This demonstrates that there is dynamical information in the ozone distribution that can potentially be used to improve the stratosphere. This is particularly important for the tropics, where radiance observations have difficulty constraining wind due to the breakdown of geostrophic balance. Global ozone assimilation provides the largest benefit when the hybrid blending coefficient is an intermediate value (0.5 was used in this study), rather than 0.0 (no ensemble background error covariance) or 1.0 (no static background error covariance), which is consistent with other hybrid DA studies. When perfect global ozone is assimilated in addition to radiance observations, wind and temperature error decreases of up to ~3 m s-1 and ~1 K occur in the tropical upper stratosphere. Assimilation of noisy global ozone (2% errors applied) results in error reductions of ~1 m s-1 and ~0.5 K in the tropics and slightly increased temperature errors in the Northern Hemisphere polar region. Reduction of the ozone sampling frequency also reduces the benefit of ozone throughout the stratosphere, with noisy polar-orbiting data having only minor impacts on wind and temperature when assimilated with radiances. An examination of ensemble cross-correlations between ozone and other variables shows that a single ozone observation behaves like a potential vorticity (PV) charge, or a monopole of PV, with rotation about a vertical axis and a vertically oriented temperature dipole. Further understanding of this relationship may help in designing observation systems that would optimize the impact of ozone on the dynamics.
iGen: An automated generator of simplified models with provable error bounds.
NASA Astrophysics Data System (ADS)
Tang, D.; Dobbie, S.
2009-04-01
Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led on to work which is currently underway to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments
NASA Technical Reports Server (NTRS)
McDowell, Mark
2008-01-01
An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function, which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.
An accurate system for onsite calibration of electronic transformers with digital output.
Zhi, Zhang; Li, Hong-Bin
2012-06-01
Calibration systems with digital output are used to replace conventional calibration systems because of the difference in operating principle and the digital-output characteristics of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors that influence the accuracy of the calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is limited to the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a two-order Hanning convolution window, gives good suppression of frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach precision class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passarge, M; Fix, M K; Manser, P
Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (~1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
NASA Astrophysics Data System (ADS)
Wang, Xianxun; Mei, Yadong
2017-04-01
Coordinated operation of hydro, wind and photovoltaic power is a way to mitigate the conflict between power generation and the output fluctuation of new energy sources and to overcome the bottleneck in new energy development. Research on the coordination mechanism has been hampered by deficiencies in characterizing output fluctuation, representing grid structure and handling power curtailment. In this paper, a multi-objective, multi-hierarchy model of coordinated hydro-wind-photovoltaic operation is built with the aims of maximizing power generation and minimizing output fluctuation, subject to constraints on the topology of the power grid and the balanced handling of curtailment. In the case study, uncoordinated and coordinated operation are compared from the perspectives of power generation, curtailment and output fluctuation in order to examine the coordination mechanism. Compared with running each source separately, coordinated hydro-wind-photovoltaic operation yields compensation benefits: peak-alternation operation with the compensating regulation of hydropower reduces curtailment significantly and maximizes resource utilization. The Pareto frontier of power generation versus output fluctuation, obtained through multi-objective optimization, clarifies the trade-off between these two objectives. Under coordinated operation, output fluctuation can be markedly reduced at the cost of a slight decline in power generation, and curtailment also drops sharply compared with separate operation.
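As a hedged illustration of the final step, the sketch below extracts a Pareto frontier from pre-computed objective values (power generation to be maximized, output fluctuation to be minimized); the candidate schedules and their objective values are synthetic placeholders, not outputs of the authors' model.

```python
import numpy as np

def pareto_front(generation, fluctuation):
    """Return indices of non-dominated (generation, fluctuation) pairs.

    A schedule is Pareto-optimal if no other schedule has both higher
    power generation and lower output fluctuation.
    """
    idx = []
    for i in range(len(generation)):
        dominated = np.any(
            (generation >= generation[i]) & (fluctuation <= fluctuation[i]) &
            ((generation > generation[i]) | (fluctuation < fluctuation[i]))
        )
        if not dominated:
            idx.append(i)
    return idx

# Hypothetical objective values for 1,000 candidate hydro-wind-PV schedules.
rng = np.random.default_rng(1)
gen = rng.uniform(80.0, 120.0, 1000)        # generation (GWh), illustrative
fluct = rng.uniform(0.05, 0.30, 1000)       # normalized fluctuation index
front = pareto_front(gen, fluct)
print(f"{len(front)} non-dominated schedules out of {gen.size}")
```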
An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai
Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.
Analytical and experimental design and analysis of an optimal processor for image registration
NASA Technical Reports Server (NTRS)
Mcgillem, C. D. (Principal Investigator); Svedlow, M.; Anuta, P. E.
1976-01-01
The author has identified the following significant results. A quantitative measure of the registration processor accuracy in terms of the variance of the registration error was derived. With the appropriate assumptions, the variance was shown to be inversely proportional to the square of the effective bandwidth times the signal to noise ratio. The final expressions were presented to emphasize both the form and simplicity of their representation. In the situation where relative spatial distortions exist between images to be registered, expressions were derived for estimating the loss in output signal to noise ratio due to these spatial distortions. These results are in terms of a reduction factor.
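Written out, the reported proportionality can be summarized as below; the symbols (σ_ε² for the registration-error variance, B_e for the effective bandwidth, SNR for the signal-to-noise ratio) are introduced here for readability and are not necessarily the authors' notation.

```latex
% Hedged restatement of the reported result, in notation introduced here.
\sigma_{\varepsilon}^{2} \;\propto\; \frac{1}{B_{e}^{2}\,\mathrm{SNR}}
```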
Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers
NASA Astrophysics Data System (ADS)
Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen
2017-04-01
Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
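As a rough sketch of the alternating structure described, the code below applies generic ADMM to an ℓ1-regularized least-squares problem (ridge-type x-update, soft-thresholding z-update, dual update); it is not the authors' STAP formulation, and the dictionary, regularization weight and toy data are assumptions.

```python
import numpy as np

def admm_l1_ls(A, b, lam, rho=1.0, iters=300):
    """Generic ADMM for  min_x 0.5*||A x - b||^2 + lam*||x||_1.

    Splitting x = z gives an x-update (a ridge-type linear solve), a z-update
    (elementwise soft thresholding) and a dual update -- the same alternating
    structure as the l1-regularized STAP solver in the abstract, though the
    radar-specific formulation there differs.
    """
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = AtA + rho * np.eye(n)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(L, Atb + rho * (z - u))
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
        u = u + x - z
    return z

# Toy sparse recovery: 3 nonzero coefficients out of 400.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400)
x_true[[5, 60, 200]] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(100)
lam = 0.05 * np.max(np.abs(A.T @ b))
x_hat = admm_l1_ls(A, b, lam)
print(np.sort(np.argsort(np.abs(x_hat))[-3:]))   # indices of the largest coefficients
```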
Experimental validation of wireless communication with chaos.
Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S; Grebogi, Celso
2016-08-01
The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noise, prevent information from being communicated at a low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication: they maximise the receiver signal-to-noise performance and consequently minimize the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals, and an integration logic together with a matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit into an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver after passing through the matched filter.
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error (MMSE) time or frequency domain equalizers. Using an experimental 3-mode dual-polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
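A small sketch of the quoted convergence criterion is given below: the convergence time is taken as the first index at which a windowed average of the output error falls within 5% of the steady-state average error computed after 50,000 symbols. The synthetic error trace, window length and margin handling are assumptions for illustration.

```python
import numpy as np

def convergence_time(err, steady_start=50_000, window=500, margin=0.05):
    """Index at which the windowed average output error first falls within
    `margin` of the steady-state average error (mean error after
    `steady_start` symbols), mirroring the criterion quoted in the abstract.
    """
    steady = np.mean(err[steady_start:])
    running = np.convolve(err, np.ones(window) / window, mode="valid")
    hits = np.flatnonzero(np.abs(running - steady) <= margin * steady)
    return int(hits[0]) if hits.size else None

# Synthetic decaying error trace standing in for an MMSE equalizer output.
n = np.arange(60_000)
err = 0.01 + 0.5 * np.exp(-n / 3_000)
print(convergence_time(err))
```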
NASA Astrophysics Data System (ADS)
Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca
2010-05-01
Modern distributed hydrological models allow the representation of different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out an assimilation of the observations. In this work the first approach was followed in order to compare the performance of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and taken here as a reference. The second is a GSS (Generating Set Search) algorithm, built to guarantee global convergence conditions and suitable for the parallel, multi-start implementation presented here. The third is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion of the comparative effectiveness of the different algorithms on several case studies of central Italy basins is provided.
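As a minimal sketch of the simplest of the three strategies, the code below wraps a black-box model run in an RMSE cost and hands it to SciPy's Nelder-Mead implementation; the toy "model", its parameters and the stopping options are placeholders, not the MOBIDIC setup.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate(run_model, observed, x0):
    """Direct-search calibration of a black-box hydrological model.

    `run_model(params)` is a placeholder for a full model run (e.g. MOBIDIC)
    returning a simulated series; the cost is a simple RMSE between simulated
    and observed values.  Nelder-Mead needs no derivatives, so the model code
    itself is left untouched.
    """
    def cost(params):
        sim = run_model(params)
        return float(np.sqrt(np.mean((sim - observed) ** 2)))

    return minimize(cost, x0, method="Nelder-Mead",
                    options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 500})

# Toy stand-in: a 2-parameter "model" whose optimum is at (0.7, 1.3) by construction.
def toy_model(p):
    return np.array([np.sin(0.1 * t * p[0]) + p[1] for t in range(100)])

obs = toy_model([0.7, 1.3])
result = calibrate(toy_model, obs, x0=np.array([0.5, 1.0]))
print(result.x)
```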
Regional to global changes in drought and implications for future changes under global warming
NASA Astrophysics Data System (ADS)
Sheffield, J.; Wood, E. F.; Kam, J.
2012-12-01
Drought can have large impacts on multiple sectors, including agriculture, water resources, ecosystems, transport, industry and tourism. In extreme cases, regional drought can lead to food insecurity and famine, and in intensive agricultural regions it can extend to global economic impacts in a connected world. Recent droughts globally have been severe and costly, but whether they are becoming more frequent and severe, and what drives any such change, is a key question. Observational evidence at large scales, such as satellite remote sensing, is often subject to short records and inhomogeneities, and ground-based data are sparse in many regions. Reliance on model output is also subject to error and to simplifications in the model physics that can, for example, amplify the impact of global warming on drought. This presentation will show the observational and model evidence for changes in drought, with a focus on the interplay between precipitation and atmospheric evaporative demand and its impact on the terrestrial water cycle and drought. We discuss the fidelity of climate models in reproducing our best estimates of historical drought variability and its drivers, the implications for uncertainties in future projections of drought from CMIP5 models, and how this has changed since CMIP3.
Error measuring system of rotary Inductosyn
NASA Astrophysics Data System (ADS)
Liu, Chengjun; Zou, Jibin; Fu, Xinghe
2008-10-01
The inductosyn is a kind of high-precision angle-position sensor with important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is characterized by its error, so error measurement is an important problem in the production and application of inductosyns. At present, the error is mainly obtained by manual measurement, which suffers from high labour intensity for the operator, easily introduced mistakes and poor repeatability. In order to solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal is obtained by precisely processing the output signals of the inductosyn and the optical dividing head. While the inductosyn rotates continuously, its zero-position error can be measured dynamically and zero-error curves can be output automatically. The measuring and calculating errors caused by human factors are overcome by this method, and it makes the measuring process quicker, more exact and more reliable. Experiments prove that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak).
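A minimal sketch of the error-curve computation is shown below, assuming simultaneous angle readings from the inductosyn and the optical dividing head are available as arrays; the wrap-around handling, units and synthetic readings are assumptions made here.

```python
import numpy as np

def zero_position_error(inductosyn_deg, reference_deg):
    """Angle error curve (in arc-seconds) of an inductosyn reading against a
    high-precision optical dividing head reference, plus its peak-to-peak
    value -- the figure of merit quoted for the measuring system.
    """
    # Wrap the difference into (-180, 180] degrees before converting units.
    diff = (inductosyn_deg - reference_deg + 180.0) % 360.0 - 180.0
    err_arcsec = diff * 3600.0
    return err_arcsec, float(err_arcsec.max() - err_arcsec.min())

# Hypothetical readings over one revolution with a small harmonic error.
ref = np.linspace(0.0, 360.0, 720, endpoint=False)
ind = ref + (2.0 / 3600.0) * np.sin(np.deg2rad(2 * ref))   # 2 arc-second error term
curve, p2p = zero_position_error(ind, ref)
print(f"peak-to-peak error: {p2p:.2f} arc-seconds")
```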
Liu, Lei; Wang, Zhanshan; Zhang, Huaguang
2018-04-01
This paper is concerned with a robust optimal tracking control strategy for a class of nonlinear multi-input multi-output discrete-time systems with unknown uncertainty via an adaptive critic design (ACD) scheme. The main purpose is to establish an adaptive actor-critic control method such that the cost function incurred in dealing with the uncertainty is minimized and the closed-loop system is stable. Based on a neural network approximator, an action network is applied to generate the optimal control signal and a critic network is used to approximate the cost function, respectively. In contrast to previous methods, the main features of this paper are: 1) the ACD scheme is integrated into the controllers to cope with the uncertainty and 2) a novel cost function, which is not in quadratic form, is proposed so that the total cost in the design procedure is reduced. It is proved that the optimal control signals and the tracking errors are uniformly ultimately bounded even when the uncertainty exists. Finally, a numerical simulation is developed to show the effectiveness of the present approach.
Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual
NASA Technical Reports Server (NTRS)
Bauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.
1975-01-01
Information pertinent to users of the Program to Optimize Simulated Trajectories (POST) is presented. The input required and the output available are described for each of the trajectory and targeting/optimization options. A sample input listing and the resulting output are given.
NASA Astrophysics Data System (ADS)
Altenau, Elizabeth H.; Pavelsky, Tamlin M.; Moller, Delwyn; Lion, Christine; Pitcher, Lincoln H.; Allen, George H.; Bates, Paul D.; Calmant, Stéphane; Durand, Michael; Neal, Jeffrey C.; Smith, Laurence C.
2017-04-01
Anabranching rivers make up a large proportion of the world's major rivers, but quantifying their flow dynamics is challenging due to their complex morphologies. Traditional in situ measurements of water levels collected at gauge stations cannot capture out-of-bank flows and are limited to defined cross sections, which presents an incomplete picture of water fluctuations in multichannel systems. Similarly, current remotely sensed measurements of water surface elevations (WSEs) and slopes are constrained by resolutions and accuracies that limit the visibility of surface waters at global scales. Here, we present new measurements of river WSE and slope along the Tanana River, AK, acquired from AirSWOT, an airborne analogue to the Surface Water and Ocean Topography (SWOT) mission. Additionally, we compare the AirSWOT observations to hydrodynamic model outputs of WSE and slope simulated across the same study area. Results indicate AirSWOT errors are significantly lower than model outputs. When compared to field measurements, the RMSE for AirSWOT measurements of WSE is 9.0 cm when averaged over 1 km² areas and 1.0 cm/km for slopes along 10 km reaches. Also, AirSWOT can accurately reproduce the spatial variations in slope critical for characterizing reach-scale hydraulics, whereas model outputs of spatial variations in slope are very poor. Combining AirSWOT and future SWOT measurements with hydrodynamic models can result in major improvements in model simulations at local to global scales. Scientists can use AirSWOT measurements to constrain model parameters over long reach distances, improve understanding of the physical processes controlling the spatial distribution of model parameters, and validate models' abilities to reproduce spatial variations in slope. Additionally, AirSWOT and SWOT measurements can be assimilated into lower-complexity models to approach the accuracies achieved by higher-complexity models.
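For concreteness, the sketch below shows one way the quoted error statistics could be formed: WSE errors are computed after averaging within each 1 km² area and summarized as an RMSE, and slope errors are summarized per 10 km reach. The grouping variables and synthetic numbers are assumptions, not the study's actual processing chain.

```python
import numpy as np

def rmse(x):
    return float(np.sqrt(np.mean(np.square(x))))

def wse_rmse_by_area(wse_obs, wse_field, area_id):
    """RMSE of water-surface-elevation error after averaging observations and
    field measurements within each 1 km^2 area (area_id labels the areas)."""
    ids = np.unique(area_id)
    err = [np.mean(wse_obs[area_id == i]) - np.mean(wse_field[area_id == i])
           for i in ids]
    return rmse(np.array(err))

def slope_rmse_by_reach(slope_obs_cm_km, slope_field_cm_km):
    """RMSE of reach-averaged slope errors (cm/km), one value per 10 km reach."""
    return rmse(slope_obs_cm_km - slope_field_cm_km)

# Hypothetical data: 5 areas x 200 samples each, and 12 reaches.
rng = np.random.default_rng(3)
area = np.repeat(np.arange(5), 200)
field = np.repeat(rng.uniform(120.0, 125.0, 5), 200)       # field WSE (m)
obs = field + rng.normal(0.0, 0.15, field.size)            # 15 cm point noise
print(wse_rmse_by_area(obs, field, area))
print(slope_rmse_by_reach(rng.normal(5.0, 0.01, 12), np.full(12, 5.0)))
```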
A Mechanism for Error Detection in Speeded Response Time Tasks
ERIC Educational Resources Information Center
Holroyd, Clay B.; Yeung, Nick; Coles, Michael G. H.; Cohen, Jonathan D.
2005-01-01
The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors…
Design and Implementation of an Intrinsically Safe Liquid-Level Sensor Using Coaxial Cable
Jin, Baoquan; Liu, Xin; Bai, Qing; Wang, Dong; Wang, Yu
2015-01-01
Real-time detection of liquid level in complex environments has always been a knotty issue. In this paper, an intrinsically safe liquid-level sensor system for flammable and explosive environments is designed and implemented. A polyvinyl chloride (PVC) coaxial cable is chosen as the sensing element and the measuring mechanism is analyzed. Then, the capacitance-to-voltage conversion circuit is designed and the expected output signal is achieved by adopting parameter optimization. Furthermore, the experimental platform of the liquid-level sensor system is constructed, which involves the entire process of measuring, converting, filtering, processing, visualizing and communicating. Additionally, the system is designed with characteristics of intrinsic safety by limiting the energy of the circuit to avoid or restrain thermal effects and sparks. Finally, a piecewise-linearization approach is adopted in order to improve the measuring accuracy by matching the appropriate calibration points. The test results demonstrate that over the measurement range of 1.0 m, the maximum nonlinearity error is 0.8% full-scale span (FSS), the maximum repeatability error is 0.5% FSS, and the maximum hysteresis error is reduced from 0.7% FSS to 0.5% FSS by applying software compensation algorithms. PMID:26029949
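A minimal sketch of the piecewise-linearization step is given below: measured output voltages are converted to liquid level by linear interpolation between calibration points. The calibration table and the use of numpy.interp are assumptions for illustration, not the authors' firmware.

```python
import numpy as np

# Hypothetical calibration table: sensor output voltage (V) at known levels (m).
cal_voltage = np.array([0.52, 1.10, 1.71, 2.35, 2.96, 3.60])
cal_level_m = np.array([0.00, 0.20, 0.40, 0.60, 0.80, 1.00])

def voltage_to_level(v):
    """Piecewise-linear conversion of measured voltage to liquid level.

    np.interp interpolates linearly between the calibration points, which is
    the software-compensation idea described in the abstract; values outside
    the calibrated range are clamped to the end points.
    """
    return np.interp(v, cal_voltage, cal_level_m)

print(voltage_to_level(np.array([1.40, 2.65, 3.60])))
```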
A novel optical fiber displacement sensor of wider measurement range based on neural network
NASA Astrophysics Data System (ADS)
Guo, Yuan; Dai, Xue Feng; Wang, Yu Tian
2006-02-01
By studying the output characteristics of a random-type optical fiber sensor and a semicircular-type optical fiber sensor, the ratio of the two output signals is used as the output signal of the whole system. The measurement range is thereby enlarged, the linearity is improved, and errors caused by changes in the reflectivity and absorptivity of the target surface are automatically compensated. Meanwhile, an optical fiber sensor model for correcting static error based on a BP artificial neural network (ANN) is established, so that intrinsic errors such as light-source fluctuations, circuit drift, intensity losses in the fiber lines and the additional losses in the receiving fiber caused by bends are eliminated. Theoretical analysis and experiments show that the nonlinearity error is 2.9%, the measuring range reaches 5-6 mm and the relative accuracy is 2%. The sensor has such characteristics as immunity to electromagnetic interference, simple construction, high sensitivity, and good accuracy and stability. The multi-point sensor system can also be used for on-line, non-contact monitoring in working locales.
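The compensation idea can be shown in a few lines: if both bundle outputs scale with the same target reflectivity factor, their ratio cancels it. The sketch below uses hypothetical intensities; the 30% darkening factor is only illustrative.

```python
def ratio_signal(random_bundle_out, semicircular_bundle_out):
    """Ratio of the two fiber-bundle outputs used as the sensor output.

    If each output scales with the same target reflectivity factor r, i.e.
    I1 = r*f1(d) and I2 = r*f2(d), the ratio depends only on displacement d,
    which is how reflectivity/absorptivity changes are compensated.
    """
    return random_bundle_out / semicircular_bundle_out

# Hypothetical responses at the same displacement, before and after the
# target surface darkens by 30%: the ratio is unchanged.
i1, i2 = 2.4, 1.6
print(ratio_signal(i1, i2), ratio_signal(0.7 * i1, 0.7 * i2))
```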
Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo
2011-10-11
We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and artificial data, for all observability scenarios considered, and for all amounts of noise added to the artificial data. In sum, the meta-heuristic methods considered are suitable for estimating the parameters in the ODE model of the dynamics of endocytosis under a range of conditions: With the model and conditions being representative of parameter estimation tasks in ODE models of biochemical systems, our results clearly highlight the promise of bio-inspired meta-heuristic methods for parameter estimation in dynamic system models within systems biology.
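As a hedged sketch of the best-performing strategy, the code below fits the parameters of a small two-state ODE (a stand-in for the Rab5/Rab7 switch, not the actual model) to noisy pseudo-experimental data with SciPy's differential evolution; the rate equations, bounds and noise level are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def simulate(params, t_eval, y0=(1.0, 0.0)):
    """Toy two-state cascade standing in for the Rab5/Rab7 switch model."""
    k1, k2 = params
    rhs = lambda t, y: [-k1 * y[0], k1 * y[0] - k2 * y[1]]
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), y0, t_eval=t_eval, rtol=1e-6)
    return sol.y

def fit(t_eval, data, bounds=((1e-3, 2.0), (1e-3, 2.0))):
    """Global parameter estimation by differential evolution on the
    sum-of-squares mismatch between simulated and measured trajectories."""
    def cost(p):
        return float(np.sum((simulate(p, t_eval) - data) ** 2))
    return differential_evolution(cost, bounds, seed=0, tol=1e-8)

# Pseudo-experimental data generated with known parameters plus noise.
t = np.linspace(0.0, 20.0, 60)
true_params = (0.4, 0.15)
rng = np.random.default_rng(0)
data = simulate(true_params, t) + 0.02 * rng.standard_normal((2, t.size))
print(fit(t, data).x)   # should land near the generating values (0.4, 0.15)
```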
Qiao, Wei; Venayagamoorthy, Ganesh K; Harley, Ronald G
2008-01-01
Wide-area coordinating control is becoming an important issue and a challenging problem in the power industry. This paper proposes a novel optimal wide-area coordinating neurocontrol (WACNC), based on wide-area measurements, for a power system with power system stabilizers, a large wind farm and multiple flexible ac transmission system (FACTS) devices. An optimal wide-area monitor (OWAM), which is a radial basis function neural network (RBFNN), is designed to identify the input-output dynamics of the nonlinear power system. Its parameters are optimized through particle swarm optimization (PSO). Based on the OWAM, the WACNC is then designed by using the dual heuristic programming (DHP) method and RBFNNs, while considering the effect of signal transmission delays. The WACNC operates at a global level to coordinate the actions of local power system controllers. Each local controller communicates with the WACNC, receives remote control signals from the WACNC to enhance its dynamic performance and therefore helps improve system-wide dynamic and transient performance. The proposed control is verified by simulation studies on a multimachine power system.
Mars Mission Optimization Based on Collocation of Resources
NASA Technical Reports Server (NTRS)
Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.
2003-01-01
This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.
Multi-petascale highly efficient parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.
A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate and at the same time supports DMA functionality, allowing for parallel processing message passing.
Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih
2015-11-01
This study proposes an integrated prediction and optimization model by using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first one is the maximization of methane percentage with single output. The second one is the maximization of biogas production with single output. The last one is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentage are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of input variables and their corresponding maximum output values are found out for each model. It is expected that the application of the integrated prediction and optimization models increases the biogas production and biogas quality, and contributes to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cooperative optimization and their application in LDPC codes
NASA Astrophysics Data System (ADS)
Chen, Ke; Rong, Jian; Zhong, Xiaochun
2008-10-01
Cooperative optimization is a new way of finding the global optima of complicated functions of many variables. The proposed algorithm belongs to the class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves a 2.0 dB gain over the sum-product algorithm at a BER of 4×10⁻⁷. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor than the sum-product algorithm once Eb/N0 is higher than 1.8 dB.
NASA Astrophysics Data System (ADS)
Tiwari, Shivendra N.; Padhi, Radhakant
2018-01-01
Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. However, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including a comparison with the closed-form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters have been set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that, among the three tested algorithms, the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2015-01-01
A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters, needing adjustment by the analyst, are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.
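The sketch below illustrates the simpler output-error end of this family (no process noise): model parameters are found by minimizing whitened output residuals, and parameter uncertainties are approximated from the Jacobian at the optimum. The first-order toy model and the whitening step are assumptions made here, not the paper's filter-error formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def output_error_fit(simulate, z_meas, theta0):
    """Minimal output-error-style estimation (no process noise).

    `simulate(theta)` returns the model output time history for parameters
    theta; residuals are whitened by the noise standard deviation estimated
    from an initial fit, and the parameter covariance is approximated from
    the Jacobian at the optimum (Cramer-Rao style bound).
    """
    res = least_squares(lambda th: (simulate(th) - z_meas).ravel(), theta0)
    sigma = np.std(res.fun, ddof=len(theta0))
    res = least_squares(lambda th: ((simulate(th) - z_meas) / sigma).ravel(),
                        res.x)
    cov = np.linalg.inv(res.jac.T @ res.jac)
    return res.x, np.sqrt(np.diag(cov))

# Toy first-order step response z = 1 - exp(-t/tau) with tau unknown.
t = np.linspace(0.0, 5.0, 200)
def model(theta):
    return 1.0 - np.exp(-t / theta[0])

z = model([0.8]) + 0.02 * np.random.default_rng(0).standard_normal(t.size)
theta_hat, theta_std = output_error_fit(model, z, theta0=[0.5])
print(theta_hat, theta_std)
```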
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability; therefore, MLD is suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
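For a concrete picture of bitwise MAP decoding with soft output, the brute-force sketch below enumerates the codewords of a tiny single-parity-check code over an AWGN channel and forms per-bit posteriors; the code, noise level and BPSK mapping are illustrative choices, not the chapter's algorithm (which avoids exhaustive enumeration).

```python
import numpy as np
from itertools import product

def bitwise_map(y, codewords, sigma):
    """Brute-force bitwise MAP decoding over an AWGN channel.

    For each codeword c (BPSK mapped to 1 - 2c), the likelihood is
    exp(-||y - s(c)||^2 / (2 sigma^2)); the per-bit posterior P(bit=1 | y)
    is the normalized sum of likelihoods over codewords with that bit set.
    Deciding each bit from its own posterior minimizes the bit error
    probability, and the posteriors themselves are the soft output.
    """
    s = 1.0 - 2.0 * codewords                        # BPSK symbols per codeword
    loglik = -np.sum((y - s) ** 2, axis=1) / (2.0 * sigma ** 2)
    w = np.exp(loglik - loglik.max())                # unnormalized posteriors
    p1 = (w @ codewords) / w.sum()                   # P(bit_i = 1 | y)
    return p1, (p1 > 0.5).astype(int)

# Tiny (3, 2) single-parity-check code: all even-weight length-3 words.
codewords = np.array([c for c in product([0, 1], repeat=3) if sum(c) % 2 == 0])
rng = np.random.default_rng(0)
tx = codewords[2]                                    # transmit one codeword
y = (1.0 - 2.0 * tx) + 0.6 * rng.standard_normal(3)
posteriors, bits = bitwise_map(y, codewords, sigma=0.6)
print(posteriors, bits)
```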
Sensitivity of geographic information system outputs to errors in remotely sensed data
NASA Technical Reports Server (NTRS)
Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.
1981-01-01
The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
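A minimal Monte Carlo sketch of the aggregation effect is given below: independent per-pixel classification errors are simulated, 25 pixels are aggregated into one cell, and the per-cell decision error is compared across pixel error rates. The class fraction, decision rule and error rates are assumptions, not the study's model.

```python
import numpy as np

def cell_error_rate(pixel_error_rate, pixels_per_cell=25, n_cells=100_000,
                    true_fraction=0.6, threshold=0.5, seed=0):
    """Monte Carlo estimate of per-cell error after aggregating pixels.

    Each pixel truly belongs to the target class with probability
    `true_fraction` and is independently mislabeled with probability
    `pixel_error_rate`.  A cell decision (threshold rule on the aggregated
    class fraction) is in error when the labeled fraction falls on the wrong
    side of `threshold` relative to the true fraction.
    """
    rng = np.random.default_rng(seed)
    truth = rng.random((n_cells, pixels_per_cell)) < true_fraction
    flips = rng.random((n_cells, pixels_per_cell)) < pixel_error_rate
    labels = truth ^ flips
    cell_truth = truth.mean(axis=1) > threshold
    cell_label = labels.mean(axis=1) > threshold
    return float(np.mean(cell_truth != cell_label))

for p in (0.10, 0.20):
    print(p, cell_error_rate(p))
```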
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.