Sample records for state estimation techniques

  1. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
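
    As a numerical illustration of the idea above, the sketch below rescales the formal weighted-least-squares covariance inv(H^T W H) by the average weighted residual variance, which is one plausible reading of how measurement residuals can be folded into an empirical covariance. The toy measurement model, weights, and noise level are invented for the example and are not taken from the report.

    ```python
    import numpy as np

    def wls_with_empirical_covariance(H, W, y):
        """Weighted least squares estimate plus an empirical covariance sketch.

        The formal covariance inv(H^T W H) reflects only the assumed measurement
        weights; rescaling it by the average weighted residual variance lets
        unmodeled error sources show up in the reported uncertainty.
        """
        HtW = H.T @ W
        P_formal = np.linalg.inv(HtW @ H)            # theoretical covariance
        x_hat = P_formal @ (HtW @ y)                 # WLS state estimate
        r = y - H @ x_hat                            # measurement residuals
        m, n = H.shape
        avg_weighted_var = (r @ W @ r) / (m - n)     # average weighted residual variance
        return x_hat, P_formal, avg_weighted_var * P_formal

    # Toy example: four linear measurements of a 2-element state, noisier than
    # the assumed weights imply, so the empirical covariance inflates.
    rng = np.random.default_rng(0)
    H = np.array([[1.0, 0.2], [0.9, -0.1], [0.3, 1.0], [0.1, 1.1]])
    W = np.diag([1.0, 1.0, 0.5, 0.5])
    y = H @ np.array([2.0, -1.0]) + rng.normal(scale=0.3, size=4)
    x_hat, P_formal, P_empirical = wls_with_empirical_covariance(H, W, y)
    print(x_hat, np.diag(P_formal), np.diag(P_empirical))
    ```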

  2. UAV State Estimation Modeling Techniques in AHRS

    NASA Astrophysics Data System (ADS)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operations. Estimating the correct state improves navigation accuracy and helps the flight mission be completed safely. One sensor configuration used for UAV state estimation is the Attitude and Heading Reference System (AHRS), with either an Extended Kalman Filter (EKF) or a feedback controller applied. The results of these two techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
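
    As a concrete sketch of the filtering step inside such a system, the snippet below runs a linear Kalman filter that fuses gyro-propagated pitch with accelerometer-derived pitch, the basic mechanism of an AHRS. A full EKF-based AHRS estimates 3-D attitude and sensor biases from gyros, accelerometers, and magnetometers; the one-axis model, sample time, and noise levels here are illustrative assumptions only.

    ```python
    import numpy as np

    dt = 0.01                                 # assumed 100 Hz sample rate
    F = np.array([[1.0, -dt],                 # state: [pitch angle, gyro bias]
                  [0.0, 1.0]])
    B = np.array([dt, 0.0])                   # gyro rate enters as a control input
    H = np.array([[1.0, 0.0]])                # accelerometer observes pitch only
    Q = np.diag([1e-5, 1e-7])                 # illustrative process noise
    R = np.array([[1e-2]])                    # illustrative accelerometer noise

    def ahrs_step(x, P, gyro_rate, accel_pitch):
        # Predict: propagate the attitude with the (bias-corrected) gyro rate.
        x = F @ x + B * gyro_rate
        P = F @ P @ F.T + Q
        # Update: correct with the accelerometer-derived pitch measurement.
        innovation = np.array([accel_pitch]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innovation
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.zeros(2), np.eye(2)
    x, P = ahrs_step(x, P, gyro_rate=0.02, accel_pitch=0.01)
    print(x)                                  # [pitch estimate, gyro bias estimate]
    ```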

  3. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.

  4. Accuracy of Noninvasive Estimation Techniques for the State of the Cochlear Amplifier

    NASA Astrophysics Data System (ADS)

    Dalhoff, Ernst; Gummer, Anthony W.

    2011-11-01

    Estimation of the function of the cochlea in humans is possible only by deduction from indirect measurements, which may be subjective or objective. Therefore, for basic research as well as diagnostic purposes, it is important to develop methods to deduce and analyse error sources of cochlear-state estimation techniques. Here, we present a model of technical and physiologic error sources contributing to the estimation accuracy of hearing threshold and the state of the cochlear amplifier, and deduce from measurements in humans that the estimated standard deviation can be considerably below 6 dB. Experimental evidence is drawn from two partly independent objective estimation techniques for the auditory signal chain based on measurements of otoacoustic emissions.

  5. Comparison study on disturbance estimation techniques in precise slow motion control

    NASA Astrophysics Data System (ADS)

    Fan, S.; Nagamune, R.; Altintas, Y.; Fan, D.; Zhang, Z.

    2010-08-01

    Precise low-speed motion control is important for industrial applications such as micro-milling machine tool feed drives and electro-optical tracking servo systems. It calls for precise measurement of position and instantaneous velocity, as well as estimation of disturbances, which include direct-drive motor force ripple, guideway friction, and cutting forces. This paper presents a comparison study of the dynamic response and noise rejection performance of three existing disturbance estimation techniques: time-delayed estimators, state-augmented Kalman filters, and conventional disturbance observers. The design essentials of these three disturbance estimators are introduced. For the design of time-delayed estimators, it is proposed to substitute a Kalman filter for the Luenberger state observer to improve noise suppression. The results show that the noise rejection performance of the state-augmented Kalman filters and the time-delayed estimators is much better than that of the conventional disturbance observers. These two estimators give not only an estimate of the disturbance but also low-noise estimates of position and instantaneous velocity. The bandwidth of the state-augmented Kalman filters is wider than that of the time-delayed estimators. In addition, the state-augmented Kalman filters give unbiased estimates of the slowly varying disturbance and the instantaneous velocity, while the time-delayed estimators do not. Simulation and experimental results obtained on the X axis of a 2.5-axis prototype micro-milling machine are provided.
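
    A minimal sketch of the state-augmentation idea follows: the unknown disturbance (force ripple, friction, cutting force) is modeled as an extra, slowly varying state of a one-axis drive, so a standard Kalman filter estimates position, velocity, and disturbance together. The drive mass, sample time, and covariances are illustrative assumptions rather than values from the paper.

    ```python
    import numpy as np

    m, dt = 1.0, 1e-3                          # assumed drive mass and sample time
    A = np.array([[1.0, dt, 0.0],
                  [0.0, 1.0, -dt / m],         # disturbance force opposes the drive
                  [0.0, 0.0, 1.0]])            # random-walk disturbance model
    B = np.array([0.0, dt / m, 0.0])           # commanded motor force input
    H = np.array([[1.0, 0.0, 0.0]])            # only position is measured
    Q = np.diag([1e-12, 1e-8, 1e-4])           # let the disturbance state wander
    R = np.array([[1e-10]])                    # encoder noise (illustrative)

    def augmented_kf_step(x, P, u, z):
        """One predict/update cycle; x = [position, velocity, disturbance]."""
        x = A @ x + B * u
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(3) - K @ H) @ P
        return x, P                            # x[2] is the disturbance estimate
    ```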

  6. A spline-based parameter and state estimation technique for static models of elastic surfaces

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Daniel, P. L.; Armstrong, E. S.

    1983-01-01

    Parameter and state estimation techniques for an elliptic system arising in a developmental model for the antenna surface in the Maypole Hoop/Column antenna are discussed. A computational algorithm based on spline approximations for the state and elastic parameters is given and numerical results obtained using this algorithm are summarized.

  7. A Comparison of Anthropogenic Carbon Dioxide Emissions Datasets: UND and CDIAC

    NASA Astrophysics Data System (ADS)

    Gregg, J. S.; Andres, R. J.

    2005-05-01

    Using data from the Department of Energy's Energy Information Administration (EIA), a technique is developed to estimate the monthly consumption of solid, liquid and gaseous fossil fuels for each state in the union. This technique employs monthly sales data to estimate the relative monthly proportions of the total annual carbon dioxide emissions from fossil-fuel use for all states in the union. The University of North Dakota (UND) results are compared to those published by Carbon Dioxide Information Analysis Center (CDIAC) at the Oak Ridge National Laboratory (ORNL). Recently, annual emissions per U.S. state (Blasing, Broniak, Marland, 2004a) as well as monthly CO2 emissions for the United States (Blasing, Broniak, Marland, 2004b) have been added to the CDIAC website. To determine the success of this technique, the individual state results are compared to the annual state totals calculated by CDIAC. In addition, the monthly country totals are compared with those produced by CDIAC. In general, the UND technique produces estimates that are consistent with those available on the CDIAC Trends website. Comparing the results from these two methods permits an improved understanding of the strengths and shortcomings of both estimation techniques. The primary advantages of the UND approach are its ease of implementation, the improved spatial and temporal resolution it can produce, and its universal applicability.
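
    The allocation step described above is simple proportioning: monthly sales fix the relative shape of the year, and the annual emissions total is distributed according to those proportions. A minimal sketch with invented numbers:

    ```python
    # Hypothetical annual CO2 total (tonnes C) for one state and fuel, and
    # arbitrary monthly sales figures; both are placeholders, not EIA data.
    annual_emissions_tC = 120_000.0
    monthly_sales = [9, 8, 8, 7, 6, 6, 7, 7, 8, 9, 11, 14]

    total_sales = sum(monthly_sales)
    monthly_emissions = [annual_emissions_tC * s / total_sales for s in monthly_sales]

    # The monthly values re-sum to the annual total by construction.
    assert abs(sum(monthly_emissions) - annual_emissions_tC) < 1e-6
    print([round(e) for e in monthly_emissions])
    ```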

  8. Evaluation of gravimetric techniques to estimate the microvascular filtration coefficient

    PubMed Central

    Dongaonkar, R. M.; Laine, G. A.; Stewart, R. H.

    2011-01-01

    Microvascular permeability to water is characterized by the microvascular filtration coefficient (Kf). Conventional gravimetric techniques to estimate Kf rely on data obtained from either transient or steady-state increases in organ weight in response to increases in microvascular pressure. Both techniques result in considerably different estimates and neither account for interstitial fluid storage and lymphatic return. We therefore developed a theoretical framework to evaluate Kf estimation techniques by 1) comparing conventional techniques to a novel technique that includes effects of interstitial fluid storage and lymphatic return, 2) evaluating the ability of conventional techniques to reproduce Kf from simulated gravimetric data generated by a realistic interstitial fluid balance model, 3) analyzing new data collected from rat intestine, and 4) analyzing previously reported data. These approaches revealed that the steady-state gravimetric technique yields estimates that are not directly related to Kf and are in some cases directly proportional to interstitial compliance. However, the transient gravimetric technique yields accurate estimates in some organs, because the typical experimental duration minimizes the effects of interstitial fluid storage and lymphatic return. Furthermore, our analytical framework reveals that the supposed requirement of tying off all draining lymphatic vessels for the transient technique is unnecessary. Finally, our numerical simulations indicate that our comprehensive technique accurately reproduces the value of Kf in all organs, is not confounded by interstitial storage and lymphatic return, and provides corroboration of the estimate from the transient technique. PMID:21346245

  9. Least-squares sequential parameter and state estimation for large space structures

    NASA Technical Reports Server (NTRS)

    Thau, F. E.; Eliazov, T.; Montgomery, R. C.

    1982-01-01

    This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. The approach combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations, which allows for sequential accumulation of the matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.
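
    The sequential accumulation step can be sketched as follows: each new batch of measurement rows is folded into a compact triangular factor by an orthogonal decomposition, so the full data matrix never has to be stored. numpy's QR factorization (Householder-based) is used as the orthogonalization step; the regressors and noise below are invented placeholders rather than the beam model from the paper.

    ```python
    import numpy as np

    def accumulate(Rd, H_new, y_new):
        """Fold new measurement rows [H_new | y_new] into the triangular factor Rd."""
        stacked = np.vstack([Rd, np.hstack([H_new, y_new[:, None]])])
        _, r = np.linalg.qr(stacked)        # re-triangularize (Householder under the hood)
        return r

    n = 3                                   # number of unknown parameters
    Rd = np.zeros((0, n + 1))               # empty accumulated factor [R | z]
    rng = np.random.default_rng(1)
    x_true = np.array([1.0, -2.0, 0.5])

    for _ in range(5):                      # five sequential batches of measurements
        H = rng.normal(size=(4, n))
        y = H @ x_true + 0.01 * rng.normal(size=4)
        Rd = accumulate(Rd, H, y)

    R, z = Rd[:n, :n], Rd[:n, n]
    x_hat = np.linalg.solve(R, z)           # back-substitution gives the LS estimate
    print(x_hat)
    ```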

  10. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these, mass and solid propellant burn depth are included as "system" state elements. The "parameter" state elements can include deviations from reference values of aerodynamic coefficients, inertia, center of gravity, atmospheric wind, etc. Propulsion parameter state elements are included not merely as options like those just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all of these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  11. Developing a Fundamental Model for an Integrated GPS/INS State Estimation System with Kalman Filtering

    NASA Technical Reports Server (NTRS)

    Canfield, Stephen

    1999-01-01

    This work will demonstrate the integration of sensor and system dynamic data and their appropriate models using an optimal filter to create a robust, adaptable, easily reconfigurable state (motion) estimation system. This state estimation system will clearly show the application of fundamental modeling and filtering techniques. These techniques are presented at a general, first principles level, that can easily be adapted to specific applications. An example of such an application is demonstrated through the development of an integrated GPS/INS navigation system. This system acquires both global position data and inertial body data, to provide optimal estimates of current position and attitude states. The optimal states are estimated using a Kalman filter. The state estimation system will include appropriate error models for the measurement hardware. The results of this work will lead to the development of a "black-box" state estimation system that supplies current motion information (position and attitude states) that can be used to carry out guidance and control strategies. This black-box state estimation system is developed independent of the vehicle dynamics and therefore is directly applicable to a variety of vehicles. Issues in system modeling and application of Kalman filtering techniques are investigated and presented. These issues include linearized models of equations of state, models of the measurement sensors, and appropriate application and parameter setting (tuning) of the Kalman filter. The general model and subsequent algorithm is developed in Matlab for numerical testing. The results of this system are demonstrated through application to data from the X-33 Michael's 9A8 mission and are presented in plots and simple animations.

  12. Toward quantitative estimation of material properties with dynamic mode atomic force microscopy: a comparative study.

    PubMed

    Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti

    2017-08-11

    In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system, and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement but slower than the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.

  13. Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay

    NASA Astrophysics Data System (ADS)

    Torkamani, Shahab; Butcher, Eric A.

    2013-07-01

    The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. Then the problem of state and parameter estimation in the resulting stochastic ordinary differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example to a practical problem in vibrations, the parameter, delay upper bound, and state estimation from noise-corrupted measurements in a distributed force model widely used for modeling machine tool vibrations in the turning operation is investigated.

  14. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e., the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
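
    For readers unfamiliar with the underlying filter, the snippet below is a minimal sequential importance resampling (SIR) step, the particle filter variant the proposed smoother builds on. The scalar soil-moisture toy model, noise levels, and observations are invented; the actual study uses a soil hydraulic model and smooths weights over a time window to jointly resample states and parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_particles = 500
    particles = rng.uniform(0.1, 0.4, n_particles)   # initial soil-moisture ensemble
    weights = np.full(n_particles, 1.0 / n_particles)

    def sir_step(particles, weights, obs, obs_std=0.02):
        # Propagate each particle through a toy drainage model with process noise.
        particles = 0.95 * particles + rng.normal(0.0, 0.005, particles.size)
        # Re-weight by the likelihood of the soil-moisture observation.
        weights = weights * np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
        weights = weights / weights.sum()
        # Resample when the effective sample size collapses (the SIR step).
        if 1.0 / np.sum(weights ** 2) < 0.5 * particles.size:
            idx = rng.choice(particles.size, particles.size, p=weights)
            particles = particles[idx]
            weights = np.full(particles.size, 1.0 / particles.size)
        return particles, weights

    for obs in [0.30, 0.28, 0.27, 0.26]:
        particles, weights = sir_step(particles, weights, obs)
    print(np.sum(weights * particles))               # posterior mean soil moisture
    ```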

  15. Irrigated rice area estimation using remote sensing techniques: Project's proposal and preliminary results. [Rio Grande do Sul, Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Deassuncao, G. V.; Moreira, M. A.; Novaes, R. A.

    1984-01-01

    The development of a methodology for annual estimates of irrigated rice crop in the State of Rio Grande do Sul, Brazil, using remote sensing techniques is proposed. The project involves interpretation, digital analysis, and sampling techniques of LANDSAT imagery. Results are discussed from a preliminary phase for identifying and evaluating irrigated rice crop areas in four counties of the State, for the crop year 1982/1983. This first phase involved just visual interpretation techniques of MSS/LANDSAT images.

  16. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error-corrupted data (gust time histories and statistics may also be estimated); (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of those estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data.

  17. Comparison of stability and control parameters for a light, single-engine, high-winged aircraft using different flight test and parameter estimation techniques

    NASA Technical Reports Server (NTRS)

    Suit, W. T.; Cannaday, R. L.

    1979-01-01

    The longitudinal and lateral stability and control parameters for a high-wing general aviation airplane are examined. Estimates using flight data obtained at various flight conditions within the normal range of the aircraft are presented. The estimation techniques, an output error technique (maximum likelihood) and an equation error technique (linear regression), are presented. The longitudinal static parameters are estimated from climbing, descending, and quasi-steady-state flight data. The lateral excitations involve a combination of rudder and ailerons. The sensitivity of the aircraft modes of motion to variations in the parameter estimates is discussed.
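
    The equation-error (linear regression) approach can be sketched in a few lines: with measured states and controls, each stability or control derivative appears as a regression coefficient in the assumed linear moment equation and is recovered by least squares. The short-period pitch model, derivative values, and noise below are illustrative assumptions, not the paper's flight data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 2000                                   # number of flight-data samples
    alpha = rng.normal(0.0, 0.05, N)           # angle of attack (rad), measured
    q = rng.normal(0.0, 0.10, N)               # pitch rate (rad/s), measured
    de = rng.normal(0.0, 0.08, N)              # elevator deflection (rad), measured

    # "True" pitching-moment derivatives used only to synthesize the data.
    M_alpha, M_q, M_de = -4.0, -1.5, -6.0
    q_dot = M_alpha * alpha + M_q * q + M_de * de + rng.normal(0.0, 0.05, N)

    X = np.column_stack([alpha, q, de])        # regressors from measured time histories
    theta, *_ = np.linalg.lstsq(X, q_dot, rcond=None)
    print(theta)                               # estimates of [M_alpha, M_q, M_de]
    ```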

  18. RCRA/UST, superfund, and EPCRA hotline training module. Introduction to toxics release inventory: Estimating releases (EPCRA section 313; 40 CFR part 372). Updated as of November 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-03-01

    The module provides an overview of general techniques that owners and operators of reporting facilities may use to estimate their toxic chemical releases. It explains the basic release estimation techniques used to determine the chemical quantities reported on the Form R and uses those techniques, along with fundamental chemical or physical principles and properties, to estimate releases of listed toxic chemicals. It covers converting units of mass, volume, and time, states the rules governing significant figures and rounding techniques, and references general and industry-specific estimation documents.

  19. Quantum-classical boundary for precision optical phase estimation

    NASA Astrophysics Data System (ADS)

    Birchall, Patrick M.; O'Brien, Jeremy L.; Matthews, Jonathan C. F.; Cable, Hugo

    2017-12-01

    Understanding the fundamental limits on the precision to which an optical phase can be estimated is of key interest for many investigative techniques utilized across science and technology. We study the estimation of a fixed optical phase shift due to a sample which has an associated optical loss, and compare phase estimation strategies using classical and nonclassical probe states. These comparisons are based on the attainable (quantum) Fisher information calculated per number of photons absorbed or scattered by the sample throughout the sensing process. We find that for a given number of incident photons upon the unknown phase, nonclassical techniques in principle provide less than a 20 % reduction in root-mean-square error (RMSE) in comparison with ideal classical techniques in multipass optical setups. Using classical techniques in a different optical setup that we analyze, which incorporates additional stages of interference during the sensing process, the achievable reduction in RMSE afforded by nonclassical techniques falls to only ≃4 % . We explain how these conclusions change when nonclassical techniques are compared to classical probe states in nonideal multipass optical setups, with additional photon losses due to the measurement apparatus.

  20. Estimating Power System Dynamic States Using Extended Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jaroslaw

    2014-10-31

    The state estimation tools which are currently deployed in power system control rooms are based on a steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper investigates the application of Extended Kalman Filtering techniques for estimating dynamic states in the state estimation process. The newly formulated "dynamic state estimation" includes true system dynamics reflected in differential equations, unlike previously proposed "dynamic state estimation" which only considers time-variant snapshots based on steady-state modeling. This new dynamic state estimation using an Extended Kalman Filter has been successfully tested on a multi-machine system. Sensitivity studies with respect to noise levels, sampling rates, model errors, and parameter errors are presented as well to illustrate the robust performance of the developed dynamic state estimation process.
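
    To make the dynamic-state idea concrete, here is a minimal extended Kalman filter step for a single classical generator modeled by the swing equation, with rotor angle and speed deviation as states and electrical power as the nonlinear measurement. The machine parameters, noise covariances, and numerical Jacobian are illustrative assumptions, not the multi-machine setup in the paper.

    ```python
    import numpy as np

    M, D, Pm, Pmax, dt = 10.0, 0.5, 0.8, 1.2, 0.01   # illustrative machine constants

    def f(x):                                  # discretized swing-equation dynamics
        delta, omega = x
        domega = (Pm - Pmax * np.sin(delta) - D * omega) / M
        return np.array([delta + dt * omega, omega + dt * domega])

    def h(x):                                  # measured electrical power output
        return np.array([Pmax * np.sin(x[0])])

    def jac(func, x, eps=1e-6):                # central-difference Jacobian
        J = np.zeros((func(x).size, x.size))
        for i in range(x.size):
            dx = np.zeros(x.size)
            dx[i] = eps
            J[:, i] = (func(x + dx) - func(x - dx)) / (2 * eps)
        return J

    def ekf_step(x, P, z, Q=np.diag([1e-6, 1e-5]), R=np.array([[1e-4]])):
        F = jac(f, x)
        x, P = f(x), F @ P @ F.T + Q           # predict through the dynamics
        H = jac(h, x)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - h(x))                 # update with the power measurement
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.array([0.3, 0.0]), np.eye(2) * 0.1
    x, P = ekf_step(x, P, z=np.array([0.4]))
    print(x)                                   # estimated [rotor angle, speed deviation]
    ```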

  1. Spring Small Grains Area Estimation

    NASA Technical Reports Server (NTRS)

    Palmer, W. F.; Mohler, R. J.

    1986-01-01

    SSG3 automatically estimates acreage of spring small grains from Landsat data. The report describes development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of the technique to analysis of four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.

  2. An Automated Technique for Estimating Daily Precipitation over the State of Virginia

    NASA Technical Reports Server (NTRS)

    Follansbee, W. A.; Chamberlain, L. W., III

    1981-01-01

    Digital IR and visible imagery obtained from a geostationary satellite located over the equator at 75 deg west longitude were provided by NASA and used to obtain a linear relationship between cloud top temperature and hourly precipitation. Two computer programs written in FORTRAN were used. The first program computes the satellite estimate field from the hourly digital IR imagery. The second program computes the final estimate for the entire state area by comparing five preliminary estimates of 24 hour precipitation with control raingage readings and determining which of the five methods gives the best estimate for the day. The final estimate is then produced by incorporating control gage readings into the winning method. To provide reliable precipitation estimates for every cell in Virginia in near real time on a daily, ongoing basis, the technique requires on the order of 125 to 150 daily gage readings by dependable, highly motivated observers distributed as uniformly as feasible across the state.
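
    The two-step logic can be sketched compactly: a linear IR relation turns cloud-top temperature into a preliminary rain rate, the candidate method with the smallest error at the control gauges is selected, and the control-gauge values are then folded back into the winning field. The linear coefficients, temperature threshold, and selection rule below are illustrative assumptions, not the operational values.

    ```python
    import numpy as np

    def ir_rain_rate(cloud_top_temp_K, a=0.0, b=0.15, threshold_K=235.0):
        """Hourly rain rate (mm/h) as an assumed linear function of cloud-top temperature."""
        return np.maximum(a + b * (threshold_K - cloud_top_temp_K), 0.0)

    def pick_best_method(candidate_fields, gauge_cells, gauge_obs):
        """Select the 24-h field closest to the control gauges, then insert the gauges."""
        errors = [np.mean(np.abs(f[gauge_cells] - gauge_obs)) for f in candidate_fields]
        best = int(np.argmin(errors))
        final = candidate_fields[best].copy()
        final[gauge_cells] = gauge_obs         # incorporate control gage readings
        return best, final

    # Invented example: three candidate 24-h estimates over 100 cells, 5 control gauges.
    rng = np.random.default_rng(4)
    candidates = [rng.gamma(2.0, 3.0, 100) for _ in range(3)]
    gauge_cells = np.array([3, 20, 47, 66, 91])
    gauge_obs = np.array([5.0, 2.5, 0.0, 8.0, 1.0])
    best, final_field = pick_best_method(candidates, gauge_cells, gauge_obs)
    print(best, final_field[gauge_cells])
    ```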

  3. Dominant root locus in state estimator design for material flow processes: A case study of hot strip rolling.

    PubMed

    Fišer, Jaromír; Zítek, Pavel; Skopec, Pavel; Knobloch, Jan; Vyhlídal, Tomáš

    2017-05-01

    The purpose of the paper is to achieve a constrained estimation of process state variables using an anisochronic state observer tuned by the dominant root locus technique. The anisochronic state observer is based on a state-space time delay model of the process. Moreover, the process model is identified as not only delayed but also non-linear. This model is developed to describe a material flow process. The root locus technique combined with the magnitude optimum method is utilized to investigate the estimation process. The resulting location of the dominant roots serves as a measure of estimation performance: the higher the dominant (natural) frequency, with the roots in the leftmost position of the complex plane, the better the performance achieved with good robustness. A model-based observer control methodology for material flow processes is also provided by means of the separation principle. For demonstration purposes, the computer-based anisochronic state observer is applied to strip temperature estimation in a hot strip finishing mill composed of seven stands. This application was the original motivation for the presented research. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  5. Event-Based H∞ State Estimation for Time-Varying Stochastic Dynamical Networks With State- and Disturbance-Dependent Noises.

    PubMed

    Sheng, Li; Wang, Zidong; Zou, Lei; Alsaadi, Fuad E

    2017-10-01

    In this paper, the event-based finite-horizon H∞ state estimation problem is investigated for a class of discrete time-varying stochastic dynamical networks with state- and disturbance-dependent noises [also called (x,v)-dependent noises]. An event-triggered scheme is proposed to decrease the frequency of the data transmission between the sensors and the estimator, where the signal is transmitted only when certain conditions are satisfied. The purpose of the problem addressed is to design a time-varying state estimator in order to estimate the network states through available output measurements. By employing the completing-the-square technique and the stochastic analysis approach, sufficient conditions are established to ensure that the error dynamics of the state estimation satisfies a prescribed H∞ performance constraint over a finite horizon. The desired estimator parameters can be designed via solving coupled backward recursive Riccati difference equations. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed state estimation scheme.

  6. Using satellite image-based maps and ground inventory data to estimate the area of the remaining Atlantic forest in the Brazilian state of Santa Catarina

    Treesearch

    Alexander C. Vibrans; Ronald E. McRoberts; Paolo Moser; Adilson L. Nicoletti

    2013-01-01

    Estimation of large area forest attributes, such as area of forest cover, from remote sensing-based maps is challenging because of image processing, logistical, and data acquisition constraints. In addition, techniques for estimating and compensating for misclassification and estimating uncertainty are often unfamiliar. Forest area for the state of Santa Catarina in...

  7. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... EFFICIENT FEDERAL-STATE FUNDS TRANSFERS Rules Applicable to Federal Assistance Programs Included in a... funding technique provisions in the Treasury-State agreement or take other mutually agreed upon corrective... funds to be transferred under the Federal assistance program or program component to which an estimate...

  8. Series resistance compensation for whole-cell patch-clamp studies using a membrane state estimator

    PubMed Central

    Sherman, AJ; Shrier, A; Cooper, E

    1999-01-01

    Whole-cell patch-clamp techniques are widely used to measure membrane currents from isolated cells. While suitable for a broad range of ionic currents, the series resistance (R(s)) of the recording pipette limits the bandwidth of the whole-cell configuration, making it difficult to measure rapid ionic currents. To increase bandwidth, it is necessary to compensate for R(s). Most methods of R(s) compensation become unstable at high bandwidth, making them hard to use. We describe a novel method of R(s) compensation that overcomes the stability limitations of standard designs. This method uses a state estimator, implemented with analog computation, to compute the membrane potential, V(m), which is then used in a feedback loop to implement a voltage clamp; we refer to this as state estimator R(s) compensation. To demonstrate the utility of this approach, we built an amplifier incorporating state estimator R(s) compensation. In benchtop tests, our amplifier showed significantly higher bandwidths and improved stability when compared with a commercially available amplifier. We demonstrated that state estimator R(s) compensation works well in practice by recording voltage-gated Na(+) currents under voltage-clamp conditions from dissociated neonatal rat sympathetic neurons. We conclude that state estimator R(s) compensation should make it easier to measure large rapid ionic currents with whole-cell patch-clamp techniques. PMID:10545359

  9. Longitudinal Factor Score Estimation Using the Kalman Filter.

    ERIC Educational Resources Information Center

    Oud, Johan H.; And Others

    1990-01-01

    How longitudinal factor score estimation--the estimation of the evolution of factor scores for individual examinees over time--can profit from the Kalman filter technique is described. The Kalman estimates change more cautiously over time, have lower estimation error variances, and reproduce the LISREL program latent state correlations more…

  10. On-Board Real-Time State and Fault Identification for Rovers

    NASA Technical Reports Server (NTRS)

    Washington, Richard

    2000-01-01

    For extended autonomous operation, rovers must identify potential faults to determine whether execution needs to be halted. At the same time, rovers present particular challenges for state estimation techniques: they are subject to environmental influences that affect sensor readings during normal and anomalous operation, and the sensors fluctuate rapidly both because of noise and because of the dynamics of the rover's interaction with its environment. This paper presents MAKSI, an on-board method for state estimation and fault diagnosis that is particularly appropriate for rovers. The method is based on a combination of continuous state estimation, using Kalman filters, and discrete state estimation, using a Markov-model representation.

  11. Optimal estimation of parameters and states in stochastic time-varying systems with time delay

    NASA Astrophysics Data System (ADS)

    Torkamani, Shahab; Butcher, Eric A.

    2013-08-01

    In this study estimation of parameters and states in stochastic linear and nonlinear delay differential systems with time-varying coefficients and constant delay is explored. The approach consists of first employing a continuous time approximation to approximate the stochastic delay differential equation with a set of stochastic ordinary differential equations. Then the problem of parameter estimation in the resulting stochastic differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the resulting system, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states.

  12. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    NASA Astrophysics Data System (ADS)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.

  13. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty. Also, in its most straight forward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. 
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).

  14. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straight forward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements and with simple normally distributed measurement errors. The sensors are assumed to have full horizon to horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chisquare values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each of scenario.

  15. Parameter estimating state reconstruction

    NASA Technical Reports Server (NTRS)

    George, E. B.

    1976-01-01

    Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.

  16. A Comparison of Two Above-Ground Biomass Estimation Techniques Integrating Satellite-Based Remotely Sensed Data and Ground Data for Tropical and Semiarid Forests in Puerto Rico

    EPA Science Inventory

    Two above-ground forest biomass estimation techniques were evaluated for the United States Territory of Puerto Rico using predictor variables acquired from satellite based remotely sensed data and ground data from the U.S. Department of Agriculture Forest Inventory Analysis (FIA)...

  17. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
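
    The targeting rule itself is simple to state in code: among candidate observation sites, observe where the forecast ensemble spread is largest. The ensemble below is random placeholder data; in the paper it would come from the LETKF forecast step.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    ensemble = rng.normal(size=(50, 40))          # 50 members, 40 candidate grid points
    site_variance = ensemble.var(axis=0)          # forecast spread at each site
    target_site = int(np.argmax(site_variance))   # observe where uncertainty is largest
    print(target_site, site_variance[target_site])
    ```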

  18. Kalman filter data assimilation: Targeting observations and parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  19. Study of synthesis techniques for insensitive aircraft control systems

    NASA Technical Reports Server (NTRS)

    Harvey, C. A.; Pope, R. E.

    1977-01-01

    Insensitive flight control system design criteria were defined in terms of maximizing performance (handling qualities, RMS gust response, transient response, stability margins) over a defined parameter range. Wing load alleviation for the C-5A was chosen as a design problem. The C-5A model was a 79-state, two-control structure with uncertainties assumed to exist in dynamic pressure, structural damping and frequency, and the stability derivative M sub w. Five new techniques (mismatch estimation, uncertainty weighting, finite dimensional inverse, maximum difficulty, dual Lyapunov) were developed. Six existing techniques (additive noise, minimax, multiplant, sensitivity vector augmentation, state dependent noise, residualization) and the mismatch estimation and uncertainty weighting techniques were synthesized and evaluated on the design example. Evaluation and comparison of these eight techniques indicated that the minimax and uncertainty weighting techniques were superior to the other six, and of these two, uncertainty weighting has lower computational requirements. Techniques based on the three remaining new concepts appear promising and are recommended for further research.

  20. National scale biomass estimators for United States tree species

    Treesearch

    Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey

    2003-01-01

    Estimates of national-scale forest carbon (C) stocks and fluxes are typically based on allometric regression equations developed using dimensional analysis techniques. However, the literature is inconsistent and incomplete with respect to large-scale forest C estimation. We compiled all available diameter-based allometric regression equations for estimating total...

  1. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John M.; Herren, Kenneth A.

    2008-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  2. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Herren, Kenneth

    2007-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  3. Estimation and identification study for flexible vehicles

    NASA Technical Reports Server (NTRS)

    Jazwinski, A. H.; Englar, T. S., Jr.

    1973-01-01

    Techniques are studied for the estimation of rigid body and bending states and the identification of model parameters associated with the single-axis attitude dynamics of a flexible vehicle. This problem is highly nonlinear but completely observable provided sufficient attitude and attitude rate data is available and provided all system bending modes are excited in the observation interval. A sequential estimator tracks the system states in the presence of model parameter errors. A batch estimator identifies all model parameters with high accuracy.

  4. Congestion estimation technique in the optical network unit registration process.

    PubMed

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
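
    The feedback loop described above can be sketched as follows, under the assumption (ours, not the paper's exact algorithm) that the OLT counts collided registration attempts per discovery round and doubles the quiet window whenever the estimated congestion is high, lowering the collision probability for the next round.

    ```python
    import random

    def discovery_round(n_onus, window_slots, rng):
        """ONUs pick random slots; only unique picks register, the rest collide."""
        slots = [rng.randrange(window_slots) for _ in range(n_onus)]
        registered = sum(1 for s in slots if slots.count(s) == 1)
        return registered, n_onus - registered

    rng = random.Random(0)
    pending, window = 32, 16                  # illustrative ONU count and initial window
    while pending > 0:
        ok, collided = discovery_round(pending, window, rng)
        pending = collided
        if collided > window // 2:            # congestion estimate: window too small
            window *= 2
        print(f"registered={ok} collided={collided} next_window={window}")
    ```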

  5. Software for the grouped optimal aggregation technique

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

    The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.

  6. Event triggered state estimation techniques for power systems with integrated variable energy resources.

    PubMed

    Francy, Reshma C; Farid, Amro M; Youcef-Toumi, Kamal

    2015-05-01

    For many decades, state estimation (SE) has been a critical technology for energy management systems utilized by power system operators. Over time, it has become a mature technology that provides an accurate representation of system state under fairly stable and well understood system operation. The integration of variable energy resources (VERs) such as wind and solar generation, however, introduces new fast frequency dynamics and uncertainties into the system. Furthermore, such renewable energy is often integrated into the distribution system, thus requiring real-time monitoring all the way to the periphery of the power grid topology and not just the (central) transmission system. The conventional solution is two-fold: solve the SE problem (1) at a faster rate in accordance with the newly added VER dynamics and (2) for the entire power grid topology including the transmission and distribution systems. Such an approach results in exponentially growing problem sets which need to be solved at faster rates. This work seeks to address these two simultaneous requirements and builds upon two recent SE methods which incorporate event-triggering such that the state estimator is only called in the case of considerable novelty in the evolution of the system state. The first method incorporates only event-triggering while the second adds the concept of tracking. Both SE methods are demonstrated on the standard IEEE 14-bus system and the results are observed for a specific bus for two different scenarios: (1) a spike in the wind power injection and (2) ramp events with higher variability. Relative to traditional state estimation, the numerical case studies showed that the proposed methods can result in computational time reductions of 90%. These results were supported by a theoretical discussion of the computational complexity of three SE techniques. The work concludes that the proposed SE techniques demonstrate practical improvements to the computational complexity of classical state estimation. In such a way, state estimation can continue to support the necessary control actions to mitigate the imbalances resulting from the uncertainties in renewables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Regional distribution of forest height and biomass from multisensor data fusion

    Treesearch

    Yifan Yu; Sassan Saatch; Linda S. Heath; Elizabeth LaPoint; Ranga Myneni; Yuri Knyazikhin

    2010-01-01

    Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM...

  8. A regularized auxiliary particle filtering approach for system state estimation and battery life prediction

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wang, Wilson; Ma, Fai

    2011-07-01

    System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
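    As a hedged illustration of step (1) only, the snippet below shows the regularization idea in isolation: after resampling, equally weighted particles are jittered with a Gaussian kernel whose bandwidth follows a standard rule of thumb, which restores diversity when the process noise is small. It is not the paper's RAPF; in particular, the rejection/resampling smoothing of the proposal in step (2) is omitted.

```python
import numpy as np

def regularize_particles(particles, rng=None):
    """Jitter equally weighted (already resampled) particles with a Gaussian
    kernel; bandwidth follows the usual rule of thumb for d dimensions."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = particles.shape
    h = (4.0 / (n * (d + 2))) ** (1.0 / (d + 4))    # rule-of-thumb bandwidth
    cov = np.atleast_2d(np.cov(particles.T))        # empirical covariance
    return particles + rng.multivariate_normal(np.zeros(d), (h**2) * cov, size=n)

# Example: a degenerate cloud of 500 two-state particles collapsed onto
# only 20 distinct values, as happens after resampling with small noise.
rng = np.random.default_rng(0)
unique_pts = rng.normal(size=(20, 2))
pts = unique_pts[rng.integers(0, 20, size=500)]
smoothed = regularize_particles(pts, rng)
print(len(np.unique(pts, axis=0)), len(np.unique(smoothed, axis=0)))
```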

  9. Maximizing mitigation benefits : project summary.

    DOT National Transportation Integrated Search

    2016-04-30

    The research team: : - Reviewed methods, techniques, and : processes at select state DOTs for estimating : mitigations costs for the following states: : Arizona, California, Colorado, Florida, New : York, North Carolina, Ohio, Oregon, : Pennsylvania,...

  10. DUAL STATE-PARAMETER UPDATING SCHEME ON A CONCEPTUAL HYDROLOGIC MODEL USING SEQUENTIAL MONTE CARLO FILTERS

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin

    Data assimilation techniques have been widely used to improve the predictability of hydrologic modeling. Among the various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
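    A hedged sketch of the parameter-updating ingredient only: kernel smoothing of parameter particles is often implemented with Liu-West-style shrinkage, where each particle is pulled toward the weighted ensemble mean and then perturbed so that the ensemble mean and variance are approximately preserved. Whether the paper uses exactly this form is not stated in the abstract.

```python
import numpy as np

def kernel_smooth_parameters(theta, weights, delta=0.98, rng=None):
    """Liu-West-style kernel smoothing of parameter particles.
    theta: (N, p) parameter particles; weights: (N,) normalized weights."""
    rng = np.random.default_rng() if rng is None else rng
    a = (3.0 * delta - 1.0) / (2.0 * delta)       # shrinkage factor
    h2 = 1.0 - a**2                               # kernel variance scale
    mean = np.average(theta, axis=0, weights=weights)
    cov = np.atleast_2d(np.cov(theta.T, aweights=weights))
    shrunk = a * theta + (1.0 - a) * mean         # shrink toward the mean
    noise = rng.multivariate_normal(np.zeros(theta.shape[1]), h2 * cov,
                                    size=theta.shape[0])
    return shrunk + noise
```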

  11. State and parameter estimation of spatiotemporally chaotic systems illustrated by an application to Rayleigh-Bénard convection.

    PubMed

    Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F

    2009-03-01

    Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.

  12. Online technique for detecting state of onboard fiber optic gyroscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Zhiyong; He, Kunpeng, E-mail: pengkhe@126.com; Pang, Shuwan

    2015-02-15

    Although the angle random walk (ARW) of a fiber optic gyroscope (FOG) has been well modeled and identified before being integrated into the high-accuracy attitude control system of a satellite, aging and unexpected failures can affect the performance of the FOG after launch, resulting in variation of the ARW coefficient. Therefore, the ARW coefficient can be regarded as an indicator of “state of health” for FOG diagnosis in some sense. The Allan variance method can be used to estimate the ARW coefficient of a FOG; however, it requires a large amount of data to be stored, and the procedure of drawing slope lines for estimation is painful. To overcome these barriers, a weighted state-space model that directly models the ARW was established for the FOG, yielding a nonlinear state-space model. Then, a neural extended-Kalman filter algorithm was implemented to estimate and track the variation of the ARW in real time. The results of the experiments show that the proposed approach is valid for detecting the state of the FOG. Moreover, the proposed technique effectively avoids the storage of data.
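    For reference, the Allan-variance baseline that the proposed technique is designed to avoid can be sketched as follows; in the white-noise region of the resulting curve, where the log-log slope is -1/2, the ARW coefficient can be read off as sigma(tau) * sqrt(tau). The sampling rate, averaging times and variable names are placeholders.

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Non-overlapped Allan deviation of a gyro rate signal sampled at fs (Hz)
    for the averaging times in `taus` (seconds)."""
    rate = np.asarray(rate, dtype=float)
    adev = []
    for tau in taus:
        m = max(1, int(round(tau * fs)))    # samples per cluster
        k = len(rate) // m                  # number of clusters
        if k < 2:
            adev.append(np.nan)
            continue
        means = rate[:k * m].reshape(k, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)
        adev.append(np.sqrt(avar))
    return np.array(adev)

# In the white-noise (slope -1/2) region, sigma(tau) ~ ARW / sqrt(tau),
# so the ARW coefficient can be read off as sigma(tau) * sqrt(tau) there.
```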

  13. Efficient tomography of a quantum many-body system

    NASA Astrophysics Data System (ADS)

    Lanyon, B. P.; Maier, C.; Holzäpfel, M.; Baumgratz, T.; Hempel, C.; Jurcevic, P.; Dhand, I.; Buyskikh, A. S.; Daley, A. J.; Cramer, M.; Plenio, M. B.; Blatt, R.; Roos, C. F.

    2017-12-01

    Quantum state tomography is the standard technique for estimating the quantum state of small systems. But its application to larger systems soon becomes impractical as the required resources scale exponentially with the size. Therefore, considerable effort is dedicated to the development of new characterization tools for quantum many-body states. Here we demonstrate matrix product state tomography, which is theoretically proven to allow for the efficient and accurate estimation of a broad class of quantum states. We use this technique to reconstruct the dynamical state of a trapped-ion quantum simulator comprising up to 14 entangled and individually controlled spins: a size far beyond the practical limits of quantum state tomography. Our results reveal the dynamical growth of entanglement and describe its complexity as correlations spread out during a quench: a necessary condition for future demonstrations of better-than-classical performance. Matrix product state tomography should therefore find widespread use in the study of large quantum many-body systems and the benchmarking and verification of quantum simulators and computers.

  14. A Method for Improving Temporal and Spatial Resolution of Carbon Dioxide Emissions

    NASA Astrophysics Data System (ADS)

    Gregg, J. S.; Andres, R. J.

    2003-12-01

    Using United States data, a method is developed to estimate the monthly consumption of solid, liquid and gaseous fossil fuels for each state in the union. This technique employs monthly sales data to estimate the relative monthly proportions of the total annual national fossil fuel use. These proportions are then used to estimate the total monthly carbon dioxide emissions for each state. To assess the success of this technique, the results from this method are compared with the data obtained from other independent methods. To determine the temporal success of the method, the resulting national time series is compared to the model produced by Carbon Dioxide Information Analysis Center (CDIAC) and the current model being developed by T. J. Blasing and C. Broniak at the Oak Ridge National Laboratory (ORNL). The University of North Dakota (UND) method fits well temporally with the results of the CDIAC and current ORNL research. To determine the success of the spatial component, the individual state results are compared to the annual state totals calculated by ORNL. Using ordinary least squares regression, the annual state totals of this method are plotted against the ORNL data. This allows a direct comparison of estimates in the form of ordered pairs against a one-to-one ideal correspondence line, and allows for easy detection of outliers in the results obtained by this estimation method. Analyzing the residuals of the linear regression model for each type of fuel permits an improved understanding of the strengths and shortcomings of the spatial component of this estimation technique. Spatially, the model is successful when compared to the current ORNL research. The primary advantages of this method are its ease of implementation and universal applicability. In general, this technique compares favorably to more labor-intensive methods that rely on more detailed data. The more detailed data is generally not available for most countries in the world. The methodology used here will be applied to other nations in the world to better understand their sub-annual cycle and sub-national spatial distribution of carbon dioxide emissions from fossil fuel consumption. Better understanding of the cycle will lead to better models used for predicting and responding to global environmental changes currently observed and anticipated.
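    The core apportionment step can be stated directly as a small sketch, under the abstract's assumption that emissions scale with fuel sales: monthly sales shares split an annual total into monthly totals. The figures below are made up for illustration.

```python
import numpy as np

def monthly_emissions(annual_total_co2, monthly_sales):
    """Split an annual CO2 total into months in proportion to monthly
    fuel sales (any consistent units)."""
    monthly_sales = np.asarray(monthly_sales, dtype=float)
    shares = monthly_sales / monthly_sales.sum()
    return annual_total_co2 * shares

# Example with made-up sales figures for one state and one fuel type.
sales = [9, 8, 8, 7, 6, 6, 6, 6, 6, 7, 8, 9]
print(monthly_emissions(120.0, sales).round(2))
```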

  15. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements have been made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers have independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.

  16. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients, can easily be incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the estimation procedure. The iterative process is terminated when the quality of the estimates provided no longer improves significantly.

  17. Graph theoretic framework based cooperative control and estimation of multiple UAVs for target tracking

    NASA Astrophysics Data System (ADS)

    Ahmed, Mousumi

    Designing the control technique for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied and an extensive study of a backstepping-based technique is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore the controller for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in a three-dimensional environment for a single UAV is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller can compute three autopilot modes, i.e., velocity, ground heading (or course angle), and flight path angle, for tracking the unmanned vehicle. Numerical implementation is performed in MATLAB with the assumption of having perfect and full state information of the target to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. A consensus-based cooperative control is studied; such a consensus-based control problem can be viewed through algebraic graph theory concepts. The communication structure between the UAVs is represented by a dynamic graph where UAVs are represented by the nodes and the communication links are represented by the edges. The previously designed controller is augmented to account for the group so that consensus is obtained based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented and the simulation results for different communication topologies are shown. This research also investigates the cases where the communication topology switches to a different topology at particular time instants. Lyapunov analysis is performed to show stability in all cases. Another important aspect of this dissertation research is to implement the controller for the case where perfect or full state information is not available. This necessitates the design of an estimator to estimate the system state. A nonlinear estimator, the Extended Kalman Filter (EKF), is first developed for target tracking with a single UAV. The uncertainties involved with the measurement model and dynamics model are considered as zero-mean Gaussian noises with known covariances. The measurements of the full state of the target are not available and only the range, elevation, and azimuth angle are available from an onboard seeker sensor. A separate EKF is designed to estimate the UAV's own state, where the state measurement is available through on-board sensors. The controller computes the three control commands based on the estimated states of the target and the UAV's own states. Estimation-based control laws are also implemented for colored-noise measurement uncertainties, and the controller performance is shown with the simulation results. The estimation-based control approach is then extended to the cooperative target tracking case. The target information is available to the network and a separate estimator is used to estimate target states. All of the UAVs in the network apply the same control law and the only difference is that each UAV updates the commands according to its connections.
The simulation is performed for both cases of fixed and time varying communication topology. Monte Carlo simulation is also performed with different sample noises to investigate the performance of the estimator. The proposed technique is shown to be simple and robust to noisy environments.
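    A hedged sketch of the seeker-measurement part only: one EKF correction for a target tracked in Cartesian coordinates, where the sensor reports range, azimuth and elevation and the measurement Jacobian is taken numerically to keep the example short. The dissertation's full backstepping controller, consensus scheme and dual-EKF structure are not reproduced, and angle wrapping of the innovation is ignored.

```python
import numpy as np

def h_range_az_el(x):
    """Range, azimuth, elevation of the relative position x = [px, py, pz, ...]."""
    px, py, pz = x[0], x[1], x[2]
    rng = np.sqrt(px**2 + py**2 + pz**2)
    az = np.arctan2(py, px)
    el = np.arctan2(pz, np.sqrt(px**2 + py**2))
    return np.array([rng, az, el])

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of f at x (keeps the sketch short)."""
    fx = f(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

def ekf_measurement_update(x, P, z, R):
    """One EKF correction with the range/azimuth/elevation model above."""
    H = numerical_jacobian(h_range_az_el, x)
    y = z - h_range_az_el(x)                 # innovation (angle wrap ignored)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: 6-state [position, velocity] track, one noisy range/az/el return.
x = np.array([500.0, 300.0, 120.0, 10.0, -5.0, 0.0])
P = np.diag([100.0] * 3 + [25.0] * 3)
R = np.diag([5.0**2, 0.01**2, 0.01**2])
z = h_range_az_el(x) + np.array([3.0, 0.005, -0.004])
x_new, P_new = ekf_measurement_update(x, P, z, R)
```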

  18. State-Space Modeling of Dynamic Psychological Processes via the Kalman Smoother Algorithm: Rationale, Finite Sample Properties, and Applications

    ERIC Educational Resources Information Center

    Song, Hairong; Ferrer, Emilio

    2009-01-01

    This article presents a state-space modeling (SSM) technique for fitting process factor analysis models directly to raw data. The Kalman smoother, combined with the expectation-maximization algorithm, is used to obtain maximum likelihood parameter estimates. To examine the finite sample properties of the estimates in SSM when common factors are involved, a…

  19. A switched systems approach to image-based estimation

    NASA Astrophysics Data System (ADS)

    Parikh, Anup

    With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance when subject to real world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by, e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers while undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis for the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition to guarantee state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than to a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. Numerous adaptive techniques have been developed to compensate for unknown parameters or functions in system dynamics; however, persistent excitation (PE) conditions are typically required to ensure parameter convergence, i.e., learning. Since the motion model is needed in the predictor, model learning is desired; however, PE is difficult to ensure a priori and infeasible to check online for nonlinear systems. Concurrent learning (CL) techniques have been developed to use recorded data and a relaxed excitation condition to ensure convergence. In CL, excitation is only required for a finite period of time, and the recorded data can be checked to determine if it is sufficiently rich. However, traditional CL requires knowledge of state derivatives, which are typically not measured and require extensive filter design and tuning to develop satisfactory estimates. In Chapter 5 of this dissertation, a novel formulation of CL is developed in terms of an integral (ICL), removing the need to estimate state derivatives while preserving parameter convergence properties. Using ICL, an estimator is developed in Chapter 6 for simultaneously estimating the pose of an object as well as learning a model of its motion for use in a predictor when the object is not visible. A switched systems analysis is provided to demonstrate the stability of the estimation and prediction with learning scheme. Dwell time conditions as well as excitation conditions are developed to ensure estimation errors converge to an arbitrarily small bound. 
Experimental results are provided to illustrate the performance of each of the developed estimation schemes. The dissertation concludes with a discussion of the contributions and limitations of the developed techniques, as well as avenues for future extensions.
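    A hedged illustration of the integral idea behind ICL, not the dissertation's estimator: for dynamics x_dot = Y(x) * theta with unknown theta, integrating both sides over short recorded windows gives x(t+T) - x(t) = [integral of Y] * theta, so theta can be fit by least squares without differentiating the measured state. The scalar decay-rate example below is synthetic.

```python
import numpy as np

# Scalar example: x_dot = theta * (-x) with true theta = 2.0 (a decay rate).
theta_true = 2.0
dt, T_win = 0.01, 0.5
t = np.arange(0.0, 5.0, dt)
x = 3.0 * np.exp(-theta_true * t)          # simulated trajectory (noise-free)

# Build integral "data windows": dX_i = x(t_i + T) - x(t_i), G_i = integral of Y(x)
step = int(round(T_win / dt))
starts = np.arange(0, len(t) - step, step)
dX = x[starts + step] - x[starts]
Y = -x                                      # regressor Y(x) = -x
G = np.array([dt * (Y[s:s + step + 1].sum() - 0.5 * (Y[s] + Y[s + step]))
              for s in starts])             # trapezoidal integral per window

# Least-squares estimate of theta from the stacked integral windows
theta_hat = np.dot(G, dX) / np.dot(G, G)
print(theta_hat)   # approximately 2.0
```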

  20. Male-Female Wage Differentials in the United States.

    ERIC Educational Resources Information Center

    Kiker, B. F.; Crouch, Henry L.

    The primary objective of this paper is to describe a method of estimating female-male wage ratios. The estimating technique presented is two stage least squares (2SLS), in which equations are estimated for both men and women. After specifying and estimating the wage equations, the male-female wage differential is calculated that would remain if…

  1. Space Shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The fifth monthly progress report includes corrections and additions to the previously submitted reports. The addition of the SRB propellant thickness as a state variable is included with the associated partial derivatives. During this reporting period, preliminary results of the estimation program checkout were presented to NASA technical personnel.

  2. Biomagnetic techniques for evaluating gastric emptying, peristaltic contraction and transit time

    PubMed Central

    la Roca-Chiapas, Jose María De; Cordova-Fraga, Teodoro

    2011-01-01

    Biomagnetic techniques were used to measure motility in various parts of the gastrointestinal (GI) tract, particularly a new technique for detecting magnetic markers and tracers. A coil was used to enhance the signal from a magnetic tracer in the GI tract and the signal was detected using a fluxgate magnetometer or a magnetoresistor in an unshielded room. Estimates of esophageal transit time were affected by the position of the subject. The reproducibility of estimates derived using the new biomagnetic technique was greater than 85% and it yielded estimates similar to those obtained using scintigraphy. This technique is suitable for studying the effect of emotional state on GI physiology and for measuring GI transit time. The biomagnetic technique can be used to evaluate digesta transit time in the esophagus, stomach and colon, peristaltic frequency and gastric emptying and is easy to use in the hospital setting. PMID:22025978

  3. Biomagnetic techniques for evaluating gastric emptying, peristaltic contraction and transit time.

    PubMed

    la Roca-Chiapas, Jose María De; Cordova-Fraga, Teodoro

    2011-10-15

    Biomagnetic techniques were used to measure motility in various parts of the gastrointestinal (GI) tract, particularly a new technique for detecting magnetic markers and tracers. A coil was used to enhance the signal from a magnetic tracer in the GI tract and the signal was detected using a fluxgate magnetometer or a magnetoresistor in an unshielded room. Estimates of esophageal transit time were affected by the position of the subject. The reproducibility of estimates derived using the new biomagnetic technique was greater than 85% and it yielded estimates similar to those obtained using scintigraphy. This technique is suitable for studying the effect of emotional state on GI physiology and for measuring GI transit time. The biomagnetic technique can be used to evaluate digesta transit time in the esophagus, stomach and colon, peristaltic frequency and gastric emptying and is easy to use in the hospital setting.

  4. Imputing missing data via sparse reconstruction techniques.

    DOT National Transportation Integrated Search

    2017-06-01

    The State of Texas does not currently have an automated approach for estimating volumes for links without counts. This research project proposes the development of an automated system to efficiently estimate the traffic volumes on uncounted links, in...

  5. A Method for Determining Pseudo-measurement State Values for Topology Observability of State Estimation in Power Systems

    NASA Astrophysics Data System (ADS)

    Urano, Shoichi; Mori, Hiroyuki

    This paper proposes a new technique for determining state values in power systems. Recently, it has become useful to carry out state estimation with PMU (Phasor Measurement Unit) data. The authors have developed a method for determining state values with an artificial neural network (ANN), considering topology observability in power systems. ANNs have the advantage of approximating nonlinear functions with high precision. The method evaluates pseudo-measurement state values for data that are lost in power systems. The method is successfully applied to the IEEE 14-bus system.

  6. Real-Time Radar-Based Tracking and State Estimation of Multiple Non-Conformant Aircraft

    NASA Technical Reports Server (NTRS)

    Cook, Brandon; Arnett, Timothy; Macmann, Owen; Kumar, Manish

    2017-01-01

    In this study, a novel solution for automated tracking of multiple unknown aircraft is proposed. Many current methods use transponders to self-report state information and augment track identification. While conformant aircraft typically report transponder information to alert surrounding aircraft of their state, vehicles may exist in the airspace that are non-compliant and need to be accurately tracked using alternative methods. In this study, a multi-agent tracking solution is presented that solely utilizes primary surveillance radar data to estimate aircraft state information. Main research challenges include state estimation, track management, data association, and establishing persistent track validity. In an effort to address these challenges, techniques such as Maximum a Posteriori estimation, Kalman filtering, degree of membership data association, and Nearest Neighbor Spanning Tree clustering are implemented for this application.
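    A hedged sketch of one ingredient named in the abstract, nearest-neighbour data association: each radar return is assigned to the track whose predicted measurement is closest in Mahalanobis distance, subject to a validation gate. The MAP estimation, clustering and track-management logic of the paper are not shown, and the gate value below is just the usual chi-square threshold for a 2-D measurement.

```python
import numpy as np

def nearest_neighbor_associate(track_predictions, track_covariances,
                               measurements, gate=9.21):
    """Assign each measurement to the nearest track by Mahalanobis distance.
    Returns a list of (measurement_index, track_index or None)."""
    assignments = []
    for j, z in enumerate(measurements):
        best, best_d2 = None, gate          # gate ~ chi-square, 2 dof, 99%
        for i, (z_pred, S) in enumerate(zip(track_predictions, track_covariances)):
            y = z - z_pred
            d2 = float(y @ np.linalg.solve(S, y))
            if d2 < best_d2:
                best, best_d2 = i, d2
        assignments.append((j, best))
    return assignments

# Example: two tracks, three returns (2-D radar positions).
preds = [np.array([0.0, 0.0]), np.array([10.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
meas = [np.array([0.3, -0.2]), np.array([9.6, 5.4]), np.array([30.0, 30.0])]
print(nearest_neighbor_associate(preds, covs, meas))
```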

  7. Estimation of Subjective Difficulty and Psychological Stress by Ambient Sensing of Desk Panel Vibrations

    NASA Astrophysics Data System (ADS)

    Hamaguchi, Nana; Yamamoto, Keiko; Iwai, Daisuke; Sato, Kosuke

    We investigate ambient sensing techniques that recognize a writer's psychological states by measuring vibrations of handwriting on a desk panel using a piezoelectric contact sensor attached to its underside. In particular, we describe a technique for estimating the subjective difficulty of a question for a student as the ratio of the time duration of thinking to the total amount of time spent on the question. Through experiments, we confirm that our technique correctly recognizes whether or not a person writes something down on paper from the measured vibration data with an accuracy of over 80%, and that the order of the computed subjective difficulties of three questions coincides with that reported by the subject in 60% of the experiments. We also propose a technique to estimate a writer's psychological stress by using the standard deviation of the spectrum of the measured vibration. Results of a proof-of-concept experiment show that the proposed technique correctly estimates whether or not the subject feels stress at least 90% of the time.

  8. Simple and accurate methods for quantifying deformation, disruption, and development in biological tissues

    PubMed Central

    Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros

    2014-01-01

    When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601

  9. Nonparametric identification of nonlinear dynamic systems using a synchronisation-based method

    NASA Astrophysics Data System (ADS)

    Kenderi, Gábor; Fidlin, Alexander

    2014-12-01

    The present study proposes an identification method for highly nonlinear mechanical systems that does not require a priori knowledge of the underlying nonlinearities to reconstruct arbitrary restoring force surfaces between degrees of freedom. This approach is based on the master-slave synchronisation between a dynamic model of the system as the slave and the real system as the master using measurements of the latter. As the model synchronises to the measurements, it becomes an observer of the real system. The optimal observer algorithm in a least-squares sense is given by the Kalman filter. Using the well-known state augmentation technique, the Kalman filter can be turned into a dual state and parameter estimator to identify parameters of a priori characterised nonlinearities. The paper proposes an extension of this technique towards nonparametric identification. A general system model is introduced by describing the restoring forces as bilateral spring-dampers with time-variant coefficients, which are estimated as augmented states. The estimation procedure is followed by an a posteriori statistical analysis to reconstruct noise-free restoring force characteristics using the estimated states and their estimated variances. Observability is provided using only one measured mechanical quantity per degree of freedom, which makes this approach less demanding in the number of necessary measurement signals compared with truly nonparametric solutions, which typically require displacement, velocity and acceleration signals. Additionally, due to the statistical rigour of the procedure, it successfully addresses signals corrupted by significant measurement noise. In the present paper, the method is described in detail, which is followed by numerical examples of one degree of freedom (1DoF) and 2DoF mechanical systems with strong nonlinearities of vibro-impact type to demonstrate the effectiveness of the proposed technique.

  10. Improved Battery State Estimation Using Novel Sensing Techniques

    NASA Astrophysics Data System (ADS)

    Abdul Samad, Nassim

    Lithium-ion batteries have been considered a great complement or substitute for gasoline engines due to their high energy and power density capabilities among other advantages. However, these types of energy storage devices are still not widespread, mainly because of their relatively high cost and safety issues, especially at elevated temperatures. This thesis extends existing methods of estimating critical battery states using model-based techniques augmented by real-time measurements from novel temperature and force sensors. Typically, temperature sensors are located near the edge of the battery, and away from the hottest core cell regions, which leads to slower response times and increased errors in the prediction of core temperatures. New sensor technology allows for flexible sensor placement at the cell surface between cells in a pack. This raises questions about the optimal locations of these sensors for best observability and temperature estimation. Using a validated model, which is developed and verified using experiments in laboratory fixtures that replicate vehicle pack conditions, it is shown that optimal sensor placement can lead to better and faster temperature estimation. Another equally important state is the state of health or the capacity fading of the cell. This thesis introduces a novel method of using force measurements for capacity fade estimation. Monitoring capacity is important for defining the range of electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs). Current capacity estimation techniques require a full discharge to monitor capacity. The proposed method can complement or replace current methods because it only requires a shallow discharge, which is especially useful in EVs and PHEVs. Using the accurate state estimation accomplished earlier, a method for downsizing a battery pack is shown to effectively reduce the number of cells in a pack without compromising safety. The influence on the battery performance (e.g. temperature, utilization, capacity fade, and cost) while downsizing and shifting the nominal operating SOC is demonstrated via simulations. The contributions in this thesis aim to make EVs, HEVs and PHEVs less costly while maintaining safety and reliability as more people are transitioning towards more environmentally friendly means of transportation.

  11. Quantifying short-lived events in multistate ionic current measurements.

    PubMed

    Balijepalli, Arvind; Ettedgui, Jessica; Cornio, Andrew T; Robertson, Joseph W F; Cheung, Kin P; Kasianowicz, John J; Vaz, Canute

    2014-02-25

    We developed a generalized technique to characterize polymer-nanopore interactions via single channel ionic current measurements. Physical interactions between analytes, such as DNA, proteins, or synthetic polymers, and a nanopore cause multiple discrete states in the current. We modeled the transitions of the current to individual states with an equivalent electrical circuit, which allowed us to describe the system response. This enabled the estimation of short-lived states that are presently not characterized by existing analysis techniques. Our approach considerably improves the range and resolution of single-molecule characterization with nanopores. For example, we characterized the residence times of synthetic polymers that are three times shorter than those estimated with existing algorithms. Because the molecule's residence time follows an exponential distribution, we recover nearly 20-fold more events per unit time that can be used for analysis. Furthermore, the measurement range was extended from 11 monomers to as few as 8. Finally, we applied this technique to recover a known sequence of single-stranded DNA from previously published ion channel recordings, identifying discrete current states with subpicoampere resolution.

  12. Sequential state estimation of nonlinear/non-Gaussian systems with stochastic input for turbine degradation estimation

    NASA Astrophysics Data System (ADS)

    Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying

    2016-05-01

    Health state estimation of inaccessible components in complex systems necessitates effective state estimation techniques using the observable variables of the system. The task becomes much more complicated when the system is nonlinear/non-Gaussian and it receives stochastic input. In this work, a novel sequential state estimation framework is developed based on a particle filtering (PF) scheme for state estimation of a general class of nonlinear dynamical systems with stochastic input. Performance of the developed framework is then validated with simulation on a Bivariate Non-stationary Growth Model (BNGM) as a benchmark. In the next step, three-year operating data of an industrial gas turbine engine (GTE) are utilized to verify the effectiveness of the developed framework. A comprehensive thermodynamic model for the GTE is therefore developed to formulate the relation of the observable parameters and the dominant degradation symptoms of the turbine, namely, loss of isentropic efficiency and increase of the mass flow. The results confirm the effectiveness of the developed framework for simultaneous estimation of multiple degradation symptoms in complex systems with noisy measured inputs.

  13. Critical review of on-board capacity estimation techniques for lithium-ion batteries in electric and hybrid electric vehicles

    NASA Astrophysics Data System (ADS)

    Farmann, Alexander; Waag, Wladislaw; Marongiu, Andrea; Sauer, Dirk Uwe

    2015-05-01

    This work provides an overview of available methods and algorithms for on-board capacity estimation of lithium-ion batteries. Accurate state estimation for battery management systems in electric vehicles and hybrid electric vehicles is becoming more essential due to the increasing attention paid to safety and lifetime issues. Different approaches for the estimation of State-of-Charge, State-of-Health and State-of-Function have been discussed and analyzed by many authors and researchers in the past. On-board estimation of capacity in large lithium-ion battery packs is definitely one of the most crucial challenges of battery monitoring in the aforementioned vehicles. This is mostly due to highly dynamic operation and conditions far from those used in laboratory environments, as well as the large variation in aging behavior of each cell in the battery pack. Accurate capacity estimation allows an accurate driving range prediction and accurate calculation of a battery's maximum energy storage capability in a vehicle. At the same time it acts as an indicator for battery State-of-Health and Remaining Useful Lifetime estimation.

  14. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  15. H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.

    PubMed

    Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir

    2018-03-01

    This paper addresses the problem of H ∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of Filippov solution, the stochastic memristor-based neural networks are transformed into systems with interval parameters. The present paper is the first to investigate the H ∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and some stochastic technique, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H ∞ performance. An explicit expression of the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Flight control synthesis for flexible aircraft using Eigenspace assignment

    NASA Technical Reports Server (NTRS)

    Davidson, J. B.; Schmidt, D. K.

    1986-01-01

    The use of eigenspace assignment techniques to synthesize flight control systems for flexible aircraft is explored. Eigenspace assignment techniques are used to achieve a specified desired eigenspace, chosen to yield desirable system impulse residue magnitudes for selected system responses. Two of these techniques are investigated. The first directly determines constant measurement feedback gains that will yield a closed-loop system eigenspace close to a desired eigenspace. The second technique selects quadratic weighting matrices in a linear quadratic control synthesis that will asymptotically yield the closed-loop achievable eigenspace. Finally, the possibility of using either of these techniques with state estimation is explored. Application of the methods to synthesize integrated flight-control and structural-mode-control laws for a large flexible aircraft is demonstrated and results discussed. Eigenspace selection criteria based on design goals are discussed, and for the study case it would appear that a desirable eigenspace can be obtained. In addition, the importance of state-space selection is noted along with problems with reduced-order measurement feedback. Since the full-state control laws may be implemented with dynamic compensation (state estimation), the use of reduced-order measurement feedback is less desirable. This is especially true since no change in the transient response from the pilot's input results if state estimation is used appropriately. The potential is also noted for high actuator bandwidth requirements if the linear quadratic synthesis approach is utilized. Even with the actuator pole location selected, a problem with unmodeled modes is noted due to high bandwidth. Some suggestions for future research include investigating how to choose an eigenspace that will achieve certain desired dynamics and stability robustness, determining how the choice of measurements affects synthesis results, and exploring how the phase relationships between desired eigenvector elements affect the synthesis results.

  17. Isotopic Techniques for Assessment of Groundwater Discharge to the Coastal Ocean

    DTIC Science & Technology

    2003-09-30

    William C. Burnett, Department of Oceanography, Florida State University. ...evaluating the influence of submarine groundwater discharge (SGD) into the ocean. Our long-term goal is to develop geochemical tools (e.g., radon and... [Figure caption fragment: estimates of the pore water Rn activity; the red line (based on an average groundwater concentration of 170 dpm/L) is considered our best estimate.]

  18. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, five previously developed state estimation methods are examined and compared for estimation of biomass concentrations at a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) estimation based on an observer; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evaluation. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although the number of measurements required is greater than that for the other methods. However, the required extra measurements are based on commonly employed instruments in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Techniques for estimating flood-peak discharges from urban basins in Missouri

    USGS Publications Warehouse

    Becker, L.D.

    1986-01-01

    Techniques are defined for estimating the magnitude and frequency of future flood peak discharges of rainfall-induced runoff from small urban basins in Missouri. These techniques were developed from an initial analysis of flood records of 96 gaged sites in Missouri and adjacent states. Final regression equations are based on a balanced, representative sampling of 37 gaged sites in Missouri. This sample included 9 statewide urban study sites, 18 urban sites in St. Louis County, and 10 predominantly rural sites statewide. Short-term records were extended on the basis of long-term climatic records and use of a rainfall-runoff model. Linear least-squares regression analyses were used with log-transformed variables to relate flood magnitudes of selected recurrence intervals (dependent variables) to selected drainage basin indexes (independent variables). For gaged urban study sites within the State, the flood peak estimates are from the frequency curves defined from the synthesized long-term discharge records. Flood frequency estimates are made for ungaged sites by using regression equations that require determination of the drainage basin size and either the percentage of impervious area or a basin development factor. Alternative sets of equations are given for the 2-, 5-, 10-, 25-, 50-, and 100-yr recurrence interval floods. The average standard errors of estimate range from about 33% for the 2-yr flood to 26% for the 100-yr flood. The techniques for estimation are applicable to flood flows that are not significantly affected by storage caused by manmade activities. Flood peak discharge estimating equations are considered applicable for sites on basins draining approximately 0.25 to 40 sq mi. (Author's abstract)

  20. Efficient Ensemble State-Parameters Estimation Techniques in Ocean Ecosystem Models: Application to the North Atlantic

    NASA Astrophysics Data System (ADS)

    El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.

    2016-02-01

    Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of the ocean biology. Furthermore, various biological parameters remain poorly known, and hence wrong specifications of such parameters can lead to large model errors. The standard joint state-parameter augmentation technique using the ensemble Kalman filter (Stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the period of the spring bloom. A better handling of the estimation problem is often carried out by separating the update of the state and the parameters using the so-called Dual EnKF. The dual filter is computationally more expensive than the Joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step and later a model propagation step is performed. We test the performance of the new smoothing-based schemes against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic. We use nutrient profile data (down to 2000 m depth) and surface partial CO2 measurements from the Mike weather station (66°N, 2°E) to estimate different biological parameters of phytoplankton and zooplankton. We analyze the performance of the filters in terms of complexity and accuracy of the state and parameter estimates.
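    A hedged sketch of the joint (augmented-state) stochastic EnKF analysis step that the study takes as its baseline: each ensemble member's state vector is augmented with the uncertain parameters and the standard perturbed-observation update is applied to the augmented ensemble. The dual and one-step-ahead-smoothing variants proposed by the authors are not reproduced, and the observation operator here is a placeholder.

```python
import numpy as np

def enkf_joint_analysis(ens_state, ens_params, obs, obs_op, R, rng=None):
    """Stochastic EnKF update of an augmented ensemble [state; parameters].
    ens_state: (N, n), ens_params: (N, p), obs: (m,), obs_op: state -> obs."""
    rng = np.random.default_rng() if rng is None else rng
    N = ens_state.shape[0]
    A = np.hstack([ens_state, ens_params])            # augmented ensemble (N, n+p)
    Y = np.array([obs_op(x) for x in ens_state])      # predicted observations (N, m)
    A_pert = A - A.mean(axis=0)
    Y_pert = Y - Y.mean(axis=0)
    P_ay = A_pert.T @ Y_pert / (N - 1)                # cross-covariance (n+p, m)
    P_yy = Y_pert.T @ Y_pert / (N - 1) + R            # innovation covariance (m, m)
    K = P_ay @ np.linalg.inv(P_yy)                    # Kalman gain
    obs_perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), R, size=N)
    A_new = A + (obs_perturbed - Y) @ K.T             # perturbed-observation update
    n = ens_state.shape[1]
    return A_new[:, :n], A_new[:, n:]                 # updated states, parameters
```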

  1. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.

    PubMed

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-12-22

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); it is derived and discussed in detail. The second is a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the state estimation update. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight.

  2. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  3. Influence of the optimization methods on neural state estimation quality of the drive system with elasticity.

    PubMed

    Orlowska-Kowalska, Teresa; Kaminski, Marcin

    2014-01-01

    The paper deals with the implementation of optimized neural networks (NNs) for state variable estimation of the drive system with an elastic joint. The signals estimated by NNs are used in the control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of a closed-loop system. The precision of state variables estimation depends on the generalization properties of NNs. A short review of optimization methods of the NN is presented. Two techniques typical for regularization and pruning methods are described and tested in detail: the Bayesian regularization and the Optimal Brain Damage methods. Simulation results show good precision of both optimized neural estimators for a wide range of changes of the load speed and the load torque, not only for nominal but also changed parameters of the drive system. The simulation results are verified in a laboratory setup.

  4. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
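    A hedged sketch of the hybrid global-plus-local idea, with SciPy's differential evolution standing in for the genetic algorithm and the TNC (truncated Newton) solver as the local refinement; the groundwater model is replaced by a toy misfit function, so only the two-stage structure is illustrated.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def misfit(params):
    """Toy stand-in for the head-misfit objective of a groundwater model."""
    k1, k2 = params                       # e.g. log-transmissivities of two zones
    return (k1 - 1.3) ** 2 + 5.0 * (k2 - 0.7) ** 2 + 0.1 * np.sin(8 * k1) ** 2

bounds = [(0.0, 3.0), (0.0, 3.0)]

# Stage 1: population-based global search (stand-in for the GA).
global_result = differential_evolution(misfit, bounds, seed=1, maxiter=50)

# Stage 2: refine with a truncated-Newton local search from the global optimum.
local_result = minimize(misfit, global_result.x, method='TNC', bounds=bounds)
print(local_result.x, local_result.fun)
```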

  5. Low-complexity DOA estimation from short data snapshots for ULA systems using the annihilating filter technique

    NASA Astrophysics Data System (ADS)

    Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali

    2017-12-01

    This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method of multiple DOA estimation from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the overall throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator and another Bayesian method developed precisely for the single snapshot case. Simulations show that the new estimator performs well over a wide SNR range. Prominently, the main advantage of the new AF-based method is that it succeeds in accurately estimating the DOAs from short data snapshots and even from a single snapshot, outperforming by far the state-of-the-art techniques both in DOA estimation accuracy and computational cost.
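
    To make the annihilating-filter idea concrete, the sketch below estimates DOAs from a single noiseless ULA snapshot: a convolution data matrix is built, its least-squares null vector gives the annihilating filter, and the roots of that filter map to arrival angles. The array size, element spacing, and source angles are hypothetical, and none of the paper's NDA refinements are included.

    ```python
    import numpy as np

    def af_doa(snapshot, n_sources, d_over_lambda=0.5):
        """Annihilating-filter DOA estimation from a single ULA snapshot (sketch).

        The filter h annihilates the exponential mixture x[m] = sum_k c_k u_k^m,
        so the roots of h are u_k = exp(j*2*pi*(d/lambda)*sin(theta_k)).
        """
        x = np.asarray(snapshot, dtype=complex)
        K = n_sources
        # convolution (Toeplitz-like) data matrix: each row is a reversed window of x
        T = np.array([x[i:i + K + 1][::-1] for i in range(len(x) - K)])
        # least-squares annihilating filter = right singular vector of smallest sigma
        _, _, Vh = np.linalg.svd(T)
        h = np.conj(Vh[-1])
        phases = np.angle(np.roots(h))
        return np.degrees(np.arcsin(phases / (2.0 * np.pi * d_over_lambda)))

    # two sources at +20 and -35 degrees seen by an 8-element half-wavelength ULA
    m = np.arange(8)
    x = (np.exp(1j * np.pi * m * np.sin(np.radians(20.0)))
         + 0.8 * np.exp(1j * np.pi * m * np.sin(np.radians(-35.0))))
    print(af_doa(x, n_sources=2))   # roughly {20, -35} degrees, order arbitrary
    ```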

  6. Efficient Approaches for Propagating Hydrologic Forcing Uncertainty: High-Resolution Applications Over the Western United States

    NASA Astrophysics Data System (ADS)

    Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.

    2017-12-01

    NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.

  7. Nonlinear regression method for estimating neutral wind and temperature from Fabry-Perot interferometer data.

    PubMed

    Harding, Brian J; Gehrels, Thomas W; Makela, Jonathan J

    2014-02-01

    The Earth's thermosphere plays a critical role in driving electrodynamic processes in the ionosphere and in transferring solar energy to the atmosphere, yet measurements of thermospheric state parameters, such as wind and temperature, are sparse. One of the most popular techniques for measuring these parameters is to use a Fabry-Perot interferometer to monitor the Doppler shift and width of naturally occurring airglow emissions in the thermosphere. In this work, we present a technique for estimating upper-atmospheric winds and temperatures from images of Fabry-Perot fringes captured by a CCD detector. We estimate instrument parameters from fringe patterns of a frequency-stabilized laser, and we use these parameters to estimate winds and temperatures from airglow fringe patterns. A unique feature of this technique is the model used for the laser and airglow fringe patterns, which fits all fringes simultaneously and attempts to model the effects of optical defects. This technique yields accurate estimates for winds, temperatures, and the associated uncertainties in these parameters, as we show with a Monte Carlo simulation.

  8. On-line adaptive battery impedance parameter and state estimation considering physical principles in reduced order equivalent circuit battery models part 2. Parameter and state estimation

    NASA Astrophysics Data System (ADS)

    Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe

    2014-09-01

    Lithium-ion battery systems employed in high power demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored. These include: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). The second paper concludes the series by presenting a multi-stage online parameter identification technique based on a weighted recursive least quadratic squares parameter estimator to determine the parameters of the proposed battery model from the first paper during operation. A novel mutation based algorithm is developed to determine the nonlinear current dependency of the charge-transfer resistance. The influence of diffusion is determined by an on-line identification technique and verified on several batteries at different operation conditions. This method guarantees a short response time and, together with its fully recursive structure, assures a long-term stable monitoring of the battery parameters. The relative dynamic voltage prediction error of the algorithm is reduced to 2%. The changes of parameters are used to determine the states of the battery. The algorithm is real-time capable and can be implemented on embedded systems.
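
    The following is a minimal sketch of a weighted recursive least squares update with exponential forgetting, the generic building block behind the multi-stage identification described above. The regressor layout, forgetting factor, and initialization are illustrative assumptions, not the paper's actual battery model.

    ```python
    import numpy as np

    class RecursiveLeastSquares:
        """Recursive least squares with exponential forgetting (sketch).

        theta holds the regression parameters of a discretised first-order
        equivalent-circuit battery model; phi is a regressor built from past
        voltage and current samples. Names and model order are illustrative.
        """
        def __init__(self, n_params, forgetting=0.995):
            self.theta = np.zeros(n_params)
            self.P = np.eye(n_params) * 1e3        # large initial covariance
            self.lam = forgetting

        def update(self, phi, y):
            phi = np.asarray(phi, dtype=float)
            denom = self.lam + phi @ self.P @ phi
            K = self.P @ phi / denom               # gain vector
            err = y - phi @ self.theta             # one-step voltage prediction error
            self.theta = self.theta + K * err
            self.P = (self.P - np.outer(K, phi @ self.P)) / self.lam
            return self.theta

    # usage: at every sample feed the measured terminal voltage y[k] and a
    # regressor such as phi[k] = [v[k-1], i[k], i[k-1]] (layout is an assumption)
    rls = RecursiveLeastSquares(n_params=3)
    theta = rls.update([3.30, 1.2, 1.1], 3.29)
    ```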

  9. Establishment of a center of excellence for applied mathematical and statistical research

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Gray, H. L.

    1983-01-01

    The state of the art was assessed with regard to efforts in support of the crop production estimation problem, and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greenness profile (Badhwar's model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined, including the properties of the Weibull distribution.

  10. State Tracking and Fault Diagnosis for Dynamic Systems Using Labeled Uncertainty Graph.

    PubMed

    Zhou, Gan; Feng, Wenquan; Zhao, Qi; Zhao, Hongbo

    2015-11-05

    Cyber-physical systems such as autonomous spacecraft, power plants and automotive systems become more vulnerable to unanticipated failures as their complexity increases. Accurate tracking of system dynamics and fault diagnosis are essential. This paper presents an efficient state estimation method for dynamic systems modeled as concurrent probabilistic automata. First, the Labeled Uncertainty Graph (LUG) method in the planning domain is introduced to describe the state tracking and fault diagnosis processes. Because the system model is probabilistic, the Monte Carlo technique is employed to sample the probability distribution of belief states. In addition, to address the sample impoverishment problem, an innovative look-ahead technique is proposed to recursively generate most likely belief states without exhaustively checking all possible successor modes. The overall algorithms incorporate two major steps: a roll-forward process that estimates system state and identifies faults, and a roll-backward process that analyzes possible system trajectories once the faults have been detected. We demonstrate the effectiveness of this approach by applying it to a real world domain: the power supply control unit of a spacecraft.

  11. Estimating tree crown widths for the primary Acadian species in Maine

    Treesearch

    Matthew B. Russell; Aaron R. Weiskittel

    2012-01-01

    In this analysis, data for seven conifer and eight hardwood species were gathered from across the state of Maine for estimating tree crown widths. Maximum and largest crown width equations were developed using tree diameter at breast height as the primary predicting variable. Quantile regression techniques were used to estimate the maximum crown width and a constrained...

  12. Downward longwave surface radiation from sun-synchronous satellite data - Validation of methodology

    NASA Technical Reports Server (NTRS)

    Darnell, W. L.; Gupta, S. K.; Staylor, W. F.

    1986-01-01

    An extensive study has been carried out to validate a satellite technique for estimating downward longwave radiation at the surface. The technique, mostly developed earlier, uses operational sun-synchronous satellite data and a radiative transfer model to provide the surface flux estimates. The satellite-derived fluxes were compared directly with corresponding ground-measured fluxes at four different sites in the United States for a common one-year period. This provided a study of seasonal variations as well as a diversity of meteorological conditions. Dome heating errors in the ground-measured fluxes were also investigated and were corrected prior to the comparisons. Comparison of the monthly averaged fluxes from the satellite and ground sources for all four sites for the entire year showed a correlation coefficient of 0.98 and a standard error of estimate of 10 W/sq m. A brief description of the technique is provided, and the results validating the technique are presented.

  13. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors

    PubMed Central

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-01-01

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must be able to deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well both for low- and high-altitude flight. In this paper, we present a state estimation technique by fusing long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator, derived and discussed in detail, that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry). The second is a long-range stereo visual odometry proposed for high-altitude MAV odometry calculation using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurements for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight. PMID:28025524

  14. Using the Delphi technique in economic evaluation: time to revisit the oracle?

    PubMed

    Simoens, S

    2006-12-01

    Although the Delphi technique has been commonly used as a data source in medical and health services research, its application in economic evaluation of medicines has been more limited. The aim of this study was to describe the methodology of the Delphi technique, to present a case for using the technique in economic evaluation, and to provide recommendations to improve such use. The literature was accessed through MEDLINE focusing on studies discussing the methodology of the Delphi technique and economic evaluations of medicines using the Delphi technique. The Delphi technique can be used to provide estimates of health care resources required and to modify such estimates when making inter-country comparisons. The Delphi technique can also contribute to mapping the treatment process under investigation, to identifying the appropriate comparator to be used, and to ensuring that the economic evaluation estimates cost-effectiveness rather than cost-efficacy. Ideally, economic evaluations of medicines should be based on real-patient data. In the absence of such data, evaluations need to incorporate the best evidence available by employing approaches such as the Delphi technique. Evaluations based on this approach should state the limitations, and explore the impact of the associated uncertainty in the results.

  15. Robust state estimation for uncertain fuzzy bidirectional associative memory networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Vadivel, P.; Sakthivel, R.; Mathiyalagan, K.; Arunkumar, A.

    2013-09-01

    This paper addresses the issue of robust state estimation for a class of fuzzy bidirectional associative memory (BAM) neural networks with time-varying delays and parameter uncertainties. By constructing the Lyapunov-Krasovskii functional, which contains the triple-integral term and using the free-weighting matrix technique, a set of sufficient conditions are derived in terms of linear matrix inequalities (LMIs) to estimate the neuron states through available output measurements such that the dynamics of the estimation error system is robustly asymptotically stable. In particular, we consider a generalized activation function in which the traditional assumptions on the boundedness, monotony and differentiability of the activation functions are removed. More precisely, the design of the state estimator for such BAM neural networks can be obtained by solving some LMIs, which are dependent on the size of the time derivative of the time-varying delays. Finally, a numerical example with simulation result is given to illustrate the obtained theoretical results.

  16. Techniques for estimating 7-day, 10-year low-flow characteristics for ungaged sites on streams in Mississippi

    USGS Publications Warehouse

    Telis, Pamela A.

    1992-01-01

    Mississippi State water laws require that the 7-day, 10-year low-flow characteristic (7Q10) of streams be used as a criterion for issuing waste-discharge permits to dischargers to streams and for limiting withdrawals of water from streams. This report presents techniques for estimating the 7Q10 for ungaged sites on streams in Mississippi based on the availability of base-flow discharge measurements at the site, the location of nearby gaged sites on the same stream, and the drainage area of the ungaged site. These techniques may be used to estimate the 7Q10 at sites on natural, unregulated or partially regulated, and non-tidal streams. Low-flow characteristics for streams in the Mississippi River alluvial plain were not estimated because the annual low-flow data exhibit decreasing trends with time. Also presented are estimates of the 7Q10 for 493 gaged sites on Mississippi streams. Techniques for estimating the 7Q10 have been developed for ungaged sites with base-flow discharge measurements, for ungaged sites on gaged streams, and for ungaged sites on ungaged streams. For an ungaged site with one or more base-flow discharge measurements, base-flow discharge data at the ungaged site are related to concurrent discharge data at a nearby gaged site. For ungaged sites on gaged streams, several methods of transferring the 7Q10 from a gaged site to an ungaged site were developed; the resulting 7Q10 values are based on drainage-area prorations for the sites. For ungaged sites on ungaged streams, the 7Q10 is estimated from a map developed for this study that shows the unit 7Q10 (7Q10 per square mile of drainage area) for ungaged basins in the State. The mapped values were estimated from the unit 7Q10 determined for nearby gaged basins, adjusted on the basis of the geology and topography of the ungaged basins.
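
    For the drainage-area proration idea mentioned above, a minimal sketch is shown below: the 7Q10 at a gaged site is scaled by the ratio of drainage areas to obtain an estimate at an ungaged site on the same stream. The exponent and the example numbers are hypothetical, and the report's full transfer rules and applicability limits are not reproduced.

    ```python
    def transfer_low_flow(q7q10_gaged, area_gaged, area_ungaged, exponent=1.0):
        """Transfer a 7Q10 estimate from a gaged to an ungaged site on the same
        stream by drainage-area proration (sketch).

        With exponent=1.0 this reduces to a simple per-square-mile scaling; the
        report's actual proration rules are more detailed.
        """
        return q7q10_gaged * (area_ungaged / area_gaged) ** exponent

    # example: gaged 7Q10 of 12 cfs at 150 mi^2, ungaged site drains 95 mi^2
    print(transfer_low_flow(12.0, 150.0, 95.0))   # about 7.6 cfs
    ```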

  17. A Model-Based Approach to Inventory Stratification

    Treesearch

    Ronald E. McRoberts

    2006-01-01

    Forest inventory programs report estimates of forest variables for areas of interest ranging in size from municipalities to counties to States and Provinces. Classified satellite imagery has been shown to be an effective source of ancillary data that, when used with stratified estimation techniques, contributes to increased precision with little corresponding increase...

  18. Linear-Quadratic-Gaussian Regulator Developed for a Magnetic Bearing

    NASA Technical Reports Server (NTRS)

    Choi, Benjamin B.

    2002-01-01

    Linear-Quadratic-Gaussian (LQG) control is a modern state-space technique for designing optimal dynamic regulators. It enables us to trade off regulation performance and control effort, and to take into account process and measurement noise. The Structural Mechanics and Dynamics Branch at the NASA Glenn Research Center has developed an LQG control for a fault-tolerant magnetic bearing suspension rig to optimize system performance and to reduce the sensor and processing noise. The LQG regulator consists of an optimal state-feedback gain and a Kalman state estimator. The first design step is to seek a state-feedback law that minimizes the cost function of regulation performance, which is measured by a quadratic performance criterion with user-specified weighting matrices, and to define the tradeoff between regulation performance and control effort. The next design step is to derive a state estimator using a Kalman filter because the optimal state feedback cannot be implemented without full state measurement. Since the Kalman filter is an optimal estimator when dealing with Gaussian white noise, it minimizes the asymptotic covariance of the estimation error.
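
    A minimal sketch of the two LQG design steps described above follows: the state-feedback gain is obtained from the control algebraic Riccati equation, and the steady-state Kalman gain from the dual (filter) Riccati equation. The plant matrices and noise covariances below are an illustrative two-state example, not the magnetic-bearing rig model.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqg_gains(A, B, C, Q, R, W, V):
        """Compute the two halves of an LQG regulator (sketch).

        K : optimal state-feedback gain for state weight Q and control weight R.
        L : steady-state Kalman gain for process noise covariance W and
            measurement noise covariance V.
        """
        P = solve_continuous_are(A, B, Q, R)          # control Riccati equation
        K = np.linalg.solve(R, B.T @ P)
        Pf = solve_continuous_are(A.T, C.T, W, V)     # filter (dual) Riccati equation
        L = Pf @ C.T @ np.linalg.inv(V)
        return K, L

    # illustrative 2-state plant (not the magnetic-bearing model from the report)
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    K, L = lqg_gains(A, B, C, Q=np.eye(2), R=np.eye(1),
                     W=0.01 * np.eye(2), V=0.1 * np.eye(1))
    ```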

  19. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.

  20. Full-field inspection of three-dimensional structures using steady-state acoustic wavenumber spectroscopy

    NASA Astrophysics Data System (ADS)

    Koskelo, Elise Anne C.; Flynn, Eric B.

    2017-02-01

    Inspection of and around joints, beams, and other three-dimensional structures is integral to practical nondestructive evaluation of large structures. Non-contact, scanning laser ultrasound techniques offer an automated means of physically accessing these regions. However, to realize the benefits of laser-scanning techniques, simultaneous inspection of multiple surfaces at different orientations to the scanner must not significantly degrade the signal level nor diminish the ability to distinguish defects from healthy geometric features. In this study, we evaluated the implementation of acoustic wavenumber spectroscopy for inspecting metal joints and crossbeams from interior angles. With this technique, we used a single-tone, steady-state, ultrasonic excitation to excite the joints via a single transducer attached to one surface. We then measured the full-field velocity responses using a scanning Laser Doppler vibrometer and produced maps of local wavenumber estimates. With the high signal level associated with steady-state excitation, scans could be performed at surface orientations of up to 45 degrees. We applied camera perspective projection transformations to remove the distortion in the scans due to a known projection angle, leading to a significant improvement in the local estimates of wavenumber. Projection leads to asymmetrical distortion in the wavenumber in one direction, making it possible to estimate view angle even when neither it nor the nominal wavenumber is known. Since plate thinning produces a purely symmetric increase in wavenumber, it is also possible to independently estimate the degree of hidden corrosion. With a two-surface joint, using the wavenumber estimate maps, we were able to automatically calculate the orthographic projection component of each angled surface in the scan area.

  1. Sequential Monte Carlo filter for state estimation of LiFePO4 batteries based on an online updated model

    NASA Astrophysics Data System (ADS)

    Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.

    2014-02-01

    Battery state monitoring is one of the key techniques in battery management systems e.g. in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery remaining useful life. Main challenges for the state estimation for LiFePO4 batteries are the flat characteristic of open-circuit-voltage over battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations to handle nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe the battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to a better handling of the hysteresis problem, results show the benefits of the proposed method against the estimation with an Extended Kalman filter.
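
    The sketch below illustrates the sequential Monte Carlo idea in its simplest bootstrap form, applied to SOC estimation with a coulomb-counting process model and a hypothetical flat OCV curve. The paper's dual estimation of the impedance, the hysteresis handling, and the adaptive elements are deliberately omitted; all names and noise levels are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ocv(soc):
        # hypothetical, nearly flat open-circuit-voltage curve (LiFePO4-like)
        return 3.2 + 0.1 * soc

    def particle_filter_soc(currents, voltages, dt, capacity_As,
                            n_particles=500, sigma_v=0.01, sigma_soc=1e-3):
        """Bootstrap sequential Monte Carlo estimate of state of charge (sketch)."""
        particles = rng.uniform(0.0, 1.0, n_particles)      # initial SOC hypotheses
        estimates = []
        for i_k, v_k in zip(currents, voltages):
            # propagate: coulomb counting plus process noise
            particles = (particles - i_k * dt / capacity_As
                         + rng.normal(0.0, sigma_soc, n_particles))
            particles = np.clip(particles, 0.0, 1.0)
            # weight each particle by the voltage likelihood
            w = np.exp(-0.5 * ((v_k - ocv(particles)) / sigma_v) ** 2)
            w /= w.sum()
            estimates.append(np.sum(w * particles))
            # resample according to the weights
            idx = rng.choice(n_particles, size=n_particles, p=w)
            particles = particles[idx]
        return np.array(estimates)

    # usage: soc_hat = particle_filter_soc(i_meas, v_meas, dt=1.0, capacity_As=2.3 * 3600)
    ```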

  2. Tracking with time-delayed data in multisensor systems

    NASA Astrophysics Data System (ADS)

    Hilton, Richard D.; Martin, David A.; Blair, William D.

    1993-08-01

    When techniques for target tracking are expanded to make use of multiple sensors in a multiplatform system, the possibility of time delayed data becomes a reality. When a discrete-time Kalman filter is applied and some of the data entering the filter are delayed, proper processing of these late data is a necessity for obtaining an optimal estimate of a target's state. If this problem is not given special care, the quality of the state estimates can be degraded relative to that quality provided by a single sensor. A negative-time update technique is developed using the criteria of minimum mean-square error (MMSE) under the constraint that only the results of the most recent update are saved. The performance of the MMSE technique is compared to that of the ad hoc approach employed in the Cooperative Engagement Capabilities (CEC) system for processing data from multiple platforms. It was discovered that the MMSE technique is a stable solution to the negative-time update problem, while the CEC technique was found to be less than desirable when used with filters designed for tracking highly maneuvering targets at relatively low data rates. The MMSE negative-time update technique was found to be a superior alternative to the existing CEC negative-time update technique.

  3. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.

  4. Support vector machines for nuclear reactor state estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavaljevski, N.; Gross, K. C.

    2000-02-14

    Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm which can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs), recently developed by Vladimir Vapnik and his coworkers, enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method. The input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method for data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising. The combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.
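
    As a rough data-based analogue of the signal estimation described above, the sketch below fits scikit-learn's support vector regression with a generic RBF kernel to synthetic multi-sensor data and predicts a target signal at new operating points. The RBF kernel stands in for the MSET kernels, and all data and hyperparameters are illustrative.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    # Training data: rows are plant operating points, columns are correlated
    # sensor signals; y is the signal to be estimated (names illustrative).
    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(200, 4))
    y_train = X_train @ np.array([0.5, -1.0, 0.3, 2.0]) + rng.normal(0.0, 0.05, 200)

    # RBF kernel used here as a generic stand-in for the MSET kernels
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
    model.fit(X_train, y_train)

    X_new = rng.normal(size=(5, 4))
    print(model.predict(X_new))   # estimated signal values at new operating points
    ```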

  5. Optimizing focal plane electric field estimation for detecting exoplanets

    NASA Astrophysics Data System (ADS)

    Groff, T.; Kasdin, N. J.; Riggs, A. J. E.

    Detecting extrasolar planets with angular separations and contrast levels similar to Earth requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission we demonstrate an estimation scheme using a discrete time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress including a bias estimate into the Kalman filter to eliminate incoherent light from the estimate. Since the exoplanets themselves are incoherent to the star, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise between the planets and speckles improves. Having established a purely focal plane based wavefront estimation technique, we discuss a sensor fusion concept where alternate wavefront sensors feedforward a time update to the focal plane estimate to improve robustness to time varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.

  6. Calibrationless parallel magnetic resonance imaging: a joint sparsity model.

    PubMed

    Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab

    2013-12-05

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE, SMASH and interpolation weights for GRAPPA, SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are at par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method required solving non-convex analysis and synthesis prior joint-sparsity problems. This work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: eight-channel brain and eight-channel Shepp-Logan phantom. Two sampling methods were used: Variable Density Random sampling and non-Cartesian Radial sampling. For the brain data, an acceleration factor of 4 was used, and for the other an acceleration factor of 6 was used. The reconstruction results were quantitatively evaluated based on the Normalised Mean Squared Error between the reconstructed image and the originals. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.

  7. The application of LQR synthesis techniques to the turboshaft engine control problem

    NASA Technical Reports Server (NTRS)

    Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.

    1984-01-01

    A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern, articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman Filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. The main rotor tip speed state is not measurable, so a Kalman Filter of the rotor was used to estimate this state. The crossover of the system was increased to 10 rad/s compared to 2 rad/s for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.

  8. A potential-energy surface study of the 2A1 and low-lying dissociative states of the methoxy radical

    NASA Technical Reports Server (NTRS)

    Jackels, C. F.

    1985-01-01

    Accurate, ab initio quantum chemical techniques are applied in the present study of low-lying bound and dissociative states of the methoxy radical at C3v conformations, using a double-zeta quality basis set that is augmented with polarization and diffuse functions. Excitation energy estimates are obtained for vertical excitation, vertical deexcitation, and system origin. The rate of methoxy photolysis is estimated to be too small to warrant its inclusion in atmospheric models.

  9. Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Martin, Gary R.

    2003-01-01

    This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
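
    The regression-equation idea above can be sketched as a log-log fit of peak flow against total drainage area. Ordinary least squares is used here instead of the generalized-least-squares procedure in the report, and the station data are synthetic, so the fitted coefficients have no hydrologic meaning.

    ```python
    import numpy as np

    # Synthetic station data: drainage area (mi^2) and 100-year peak flow (cfs).
    area = np.array([12.0, 55.0, 130.0, 300.0, 820.0, 1500.0])
    q100 = np.array([900.0, 2800.0, 5200.0, 9800.0, 21000.0, 33000.0])

    # Fit log10(Q) = b0 + b1 * log10(A); the report's equations were fitted by
    # generalized least squares, ordinary least squares is shown for brevity.
    b1, b0 = np.polyfit(np.log10(area), np.log10(q100), deg=1)

    def estimate_peak(area_mi2):
        return 10.0 ** (b0 + b1 * np.log10(area_mi2))

    print(estimate_peak(200.0))   # peak-flow estimate for a 200 mi^2 ungaged basin
    ```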

  10. SAFIS Area Estimation Techniques

    Treesearch

    Gregory A. Reams

    2000-01-01

    The Southern Annual Forest Inventory System (SAFIS) is in various stages of implementation in 8 of the 13 southern states served by the Southern Research Station of the USDA Forest Service. Compared to periodic inventories, SAFIS requires more rapid generation of land use and land cover maps. The current photo system for phase one area estimation has changed little...

  11. SEASONAL AND REGIONAL VARIATIONS OF PRIMARY AND SECONDARY ORGANIC AEROSOLS OVER THE CONTINENTAL UNITED STATES: OBSERVATION-BASED ESTIMATES AND MODEL EVALUATION

    EPA Science Inventory

    Due to the lack of an analytical technique for directly quantifying the atmospheric concentrations of primary (OCpri) and secondary (OCsec) organic carbon aerosols, different indirect methods have been developed to estimate their concentrations. In this stu...

  12. SAFIS area estimation techniques

    Treesearch

    Gregory A. Reams

    2000-01-01

    The Southern Annual Forest Inventory System (SAFIS) is in various stages of implementation in 8 of the 13 southern states served by the Southern Research Station of the USDA Forest Service. Compared to periodic inventories, SAFIS requires more rapid generation of land use and land cover maps. The current photo system for phase one area estimation has changed little...

  13. An Estimation of Technical Efficiency for Florida Public Elementary Schools

    ERIC Educational Resources Information Center

    Conroy, Stephen J.; Arguea, Nestor M.

    2008-01-01

    We use a frontier production function estimation technique to analyze whether elementary schools in Florida are operating at an efficient level and to explain any inefficiencies. A motivation for this analysis comes from recent state and federal level educational initiatives designed to improve school accountability and reduce class sizes. Results…

  14. Computational methods for estimation of parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.; Murphy, K. A.

    1983-01-01

    Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined, and numerical findings for use of the resulting schemes in model 'one-dimensional seismic inversion' problems are summarized.

  15. UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS.

    PubMed

    Ainsbury, Elizabeth A; Samaga, Daniel; Della Monaca, Sara; Marrale, Maurizio; Bassinet, Celine; Burbidge, Christopher I; Correcher, Virgilio; Discher, Michael; Eakins, Jon; Fattibene, Paola; Güçlü, Inci; Higueras, Manuel; Lund, Eva; Maltar-Strmecki, Nadica; McKeever, Stephen; Rääf, Christopher L; Sholom, Sergey; Veronese, Ivan; Wieser, Albrecht; Woda, Clemens; Trompier, Francois

    2018-03-01

    Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the 'EURADOS Working Group 10-Retrospective dosimetry' members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.
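
    To illustrate the Monte Carlo approach the review advocates, the sketch below propagates uncertainty in a hypothetical linear-quadratic dose-response calibration and in the measured aberration yield into a distribution of dose estimates, from which a central value and a 95% interval are read off. All coefficients and uncertainties are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical linear-quadratic calibration Y = c + alpha*D + beta*D^2
    # (aberrations per cell); means and standard deviations are illustrative.
    c, alpha, beta = 0.001, 0.02, 0.06
    sd_alpha, sd_beta = 0.004, 0.008

    observed_yield, sd_yield = 0.35, 0.05   # measured yield and its uncertainty

    def dose_from_yield(y, a, b, c0):
        # solve c0 + a*D + b*D^2 = y for the positive root
        disc = a * a + 4.0 * b * (y - c0)
        return (-a + np.sqrt(disc)) / (2.0 * b)

    n = 100_000
    a_s = rng.normal(alpha, sd_alpha, n)
    b_s = rng.normal(beta, sd_beta, n)
    y_s = rng.normal(observed_yield, sd_yield, n)
    valid = (b_s > 0) & (a_s * a_s + 4.0 * b_s * (y_s - c) > 0)
    doses = dose_from_yield(y_s[valid], a_s[valid], b_s[valid], c)

    print(np.percentile(doses, [2.5, 50, 97.5]))   # dose estimate with 95% interval
    ```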

  16. Weak Value Amplification is Suboptimal for Estimation and Detection

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-01-01

    We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.

  17. Effective wind speed estimation: Comparison between Kalman Filter and Takagi-Sugeno observer techniques.

    PubMed

    Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst

    2016-05-01

    Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation with enhanced Kalman Filter techniques: The performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer, a Linear, Extended and Unscented Kalman Filter are assessed. Hence the Takagi-Sugeno observer and enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is the systematic comparison with different design assumptions and requirements and the numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Prediction of slope stability based on numerical modeling of stress–strain state of rocks

    NASA Astrophysics Data System (ADS)

    Kozhogulov, K. Ch.; Nifadyev, V. I.; Usmanov, S. F.

    2018-03-01

    The paper presents a technique for estimating rock mass stability based on finite element modeling of the stress–strain state of rocks. Modeling results for a pit wall landslide, represented as a flow of particles along a sloped surface, are described.

  19. Conceptualizing Public Attitudes toward the Welfare State: A Comment on Hasenfeld and Rafferty.

    ERIC Educational Resources Information Center

    Emerson, Michael O.; Van Buren, Mark E.

    1992-01-01

    Using structural equation technique to replicate results of Hasenfeld and Rafferty's causal model predicting public attitudes toward welfare state programs with the social ideologies of work ethic and social rights. By incorporating estimates of measurement error, results failed to support the authors' original conclusions. Operationalizing key…

  20. Eigenspace techniques for active flutter suppression

    NASA Technical Reports Server (NTRS)

    Garrard, William L.; Liebst, Bradley S.; Farm, Jerome A.

    1987-01-01

    The use of eigenspace techniques for the design of an active flutter suppression system for a hypothetical research drone is discussed. One leading edge and two trailing edge aerodynamic control surfaces and four sensors (accelerometers) are available for each wing. Full state control laws are designed by selecting feedback gains which place closed loop eigenvalues and shape closed loop eigenvectors so as to stabilize wing flutter and reduce gust loads at the wing root while yielding acceptable robustness and satisfying constraints on rms control surface activity. These controllers are realized by state estimators designed using an eigenvalue placement/eigenvector shaping technique which results in recovery of the full state loop transfer characteristics. The resulting feedback compensators are shown to perform almost as well as the full state designs. They also exhibit acceptable performance in situations in which the failure of an actuator is simulated.

  1. [Potentials in the regionalization of health indicators using small-area estimation methods : Exemplary results based on the 2009, 2010 and 2012 GEDA studies].

    PubMed

    Kroll, Lars Eric; Schumann, Maria; Müters, Stephan; Lampert, Thomas

    2017-12-01

    Nationwide health surveys can be used to estimate regional differences in health. With traditional estimation techniques, the spatial depth of these estimates is limited by the constrained sample size; so far, without special refreshment samples, results have only been available for the more populous federal states of Germany. An alternative is regression-based small-area estimation techniques. These models can generate smaller-scale data, but they are also subject to greater statistical uncertainties because of the model assumptions. In the present article, exemplary regionalized results for the self-rated health status of the respondents, based on the studies "Gesundheit in Deutschland aktuell" (GEDA studies 2009, 2010 and 2012), are compared. The aim of the article is to analyze the range of regional estimates in order to assess more adequately the usefulness of the techniques for health reporting. The results show that the estimated prevalence is relatively stable when different samples are used. Important determinants of the variation of the estimates are the achieved sample size at the district level and the type of district (cities vs. rural regions). Overall, the present study shows that small-area modeling of prevalence is associated with additional uncertainties compared to conventional estimates, which should be taken into account when interpreting the corresponding findings.

  2. Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
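
    A minimal sketch of the hard-constraint idea described above appears below: the unconstrained Kalman estimate is projected onto the inequality-constrained set by solving a small quadratic program weighted by the inverse state covariance. The two-state example and the non-negativity constraints are hypothetical; the paper's full filter and engine model are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def project_estimate(x_hat, P, A, b):
        """Project an unconstrained Kalman estimate onto {x : A x <= b} (sketch).

        The projection minimises (x - x_hat)' P^-1 (x - x_hat), i.e. it stays as
        close as possible to the filter estimate in the metric of its covariance.
        """
        W = np.linalg.inv(P)
        objective = lambda x: (x - x_hat) @ W @ (x - x_hat)
        cons = {"type": "ineq", "fun": lambda x: b - A @ x}   # A x <= b
        res = minimize(objective, x_hat, constraints=[cons], method="SLSQP")
        return res.x

    # example: enforce that both (hypothetical) health parameters stay non-negative
    x_hat = np.array([-0.02, 0.30])
    P = np.diag([0.01, 0.04])
    A = -np.eye(2)          # -x <= 0  is equivalent to  x >= 0
    b = np.zeros(2)
    print(project_estimate(x_hat, P, A, b))   # first component clipped to about 0
    ```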

  3. Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.

    PubMed

    Hui, Zhuo; Sankaranarayanan, Aswin C

    2017-10-01

    This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently, refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it does not require iterative optimization techniques nor the need for careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.

  4. Reconstructing the hidden states in time course data of stochastic models.

    PubMed

    Zimmer, Christoph

    2015-11-01

    Parameter estimation is central for analyzing models in Systems Biology. The relevance of stochastic modeling in the field is increasing; therefore, the need for tailored parameter estimation techniques is increasing as well. Challenges for parameter estimation are partial observability, measurement noise, and the computational complexity arising from the dimension of the parameter space. This article extends the 'multiple shooting for stochastic systems' method, developed for inference in intrinsic stochastic systems. The treatment of extrinsic noise and the estimation of the unobserved states are improved by taking into account the correlation between unobserved and observed species. This article demonstrates the power of the method on different scenarios of a Lotka-Volterra model, including cases in which the prey population dies out or explodes, and a Calcium oscillation system. Besides showing how the new extension improves the accuracy of the parameter estimates, this article analyzes the accuracy of the state estimates. In contrast to previous approaches, the new approach is well able to estimate states and parameters for all the scenarios. As it does not need stochastic simulations, it is of the same order of speed as conventional least squares parameter estimation methods with respect to computational time. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  5. A framework with nonlinear system model and nonparametric noise for gas turbine degradation state estimation

    NASA Astrophysics Data System (ADS)

    Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying

    2015-06-01

    Modern health management approaches for gas turbine engines (GTEs) aim to precisely estimate the health state of the GTE components to optimize maintenance decisions with respect to both economy and safety. In this research, we propose an advanced framework to identify the most likely degradation state of the turbine section in a GTE for prognostics and health management (PHM) applications. A novel nonlinear thermodynamic model is used to predict the performance parameters of the GTE given the measurements. The ratio between real efficiency of the GTE and simulated efficiency in the newly installed condition is defined as the health indicator and provided at each sequence. The symptom of nonrecoverable degradations in the turbine section, i.e. loss of turbine efficiency, is assumed to be the internal degradation state. A regularized auxiliary particle filter (RAPF) is developed to sequentially estimate the internal degradation state in nonuniform time sequences upon receiving sets of new measurements. The effectiveness of the technique is examined using the operating data over an entire time-between-overhaul cycle of a simple-cycle industrial GTE. The results clearly show the trend of degradation in the turbine section and the occasional fluctuations, which are well supported by the service history of the GTE. The research also suggests the efficacy of the proposed technique to monitor the health state of the turbine section of a GTE by implementing model-based PHM without the need for additional instrumentation.

  6. Set-theoretic estimation of hybrid system configurations.

    PubMed

    Benazera, Emmanuel; Travé-Massuyès, Louise

    2009-10-01

    Hybrid systems serve as a powerful modeling paradigm for representing complex continuous controlled systems that exhibit discrete switches in their dynamics. The system and the models of the system are nondeterministic due to operation in uncertain environment. Bayesian belief update approaches to stochastic hybrid system state estimation face a blow up in the number of state estimates. Therefore, most popular techniques try to maintain an approximation of the true belief state by either sampling or maintaining a limited number of trajectories. These limitations can be avoided by using bounded intervals to represent the state uncertainty. This alternative leads to splitting the continuous state space into a finite set of possibly overlapping geometrical regions that together with the system modes form configurations of the hybrid system. As a consequence, the true system state can be captured by a finite number of hybrid configurations. A set of dedicated algorithms that can efficiently compute these configurations is detailed. Results are presented on two systems of the hybrid system literature.

  7. A dynamic programming approach to estimate the capacity value of energy storage

    DOE PAGES

    Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul

    2013-09-17

    Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.

  8. The prevalence of silicosis in Orange Free State gold miners.

    PubMed

    Cowie, R L; van Schalkwyk, M G

    1987-01-01

    The prevalence of silicosis among migrant laborers in the South African Orange Free State gold mines has not previously been estimated. Two methods were used to estimate the prevalence of silicosis in this population, and the two techniques are described. The difference between the two estimates illustrates the difficulty of epidemiologic studies in this type of working population. It is noted that the highest estimate, 138 cases per 10,000 workers, is certainly less than the true prevalence of the disorder. The use of routine miniature (100-mm) chest radiographs for the detection of silicosis was validated through comparison with normal-size (125-kV) radiographs and through analysis of the consistency of reading of second miniature films from the same subjects.

  9. Uniform stable observer for the disturbance estimation in two renewable energy systems.

    PubMed

    Rubio, José de Jesús; Ochoa, Genaro; Balcazar, Ricardo; Pacheco, Jaime

    2015-09-01

    In this study, an observer for state and disturbance estimation in two renewable energy systems is introduced. Restrictions on the gains of the proposed observer are found that guarantee its stability and the convergence of its error; furthermore, these results are utilized to obtain a good estimation. The introduced technique is applied to state and disturbance estimation in a wind turbine and an electric vehicle. The wind turbine has a rotatory tower to catch the incoming air, which is transformed into electricity, and the electric vehicle has generators connected to its wheels to catch the vehicle movement, which is transformed into electricity. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Valuing morbidity from wildfire smoke exposure: A comparison of revealed and stated preference techniques

    Treesearch

    Leslie Richardson; John B. Loomis; Patricia A. Champ

    2013-01-01

    Estimating the economic benefits of reduced health damages due to improvements in environmental quality continues to challenge economists. We review welfare measures associated with reduced wildfire smoke exposure, and a unique dataset from California's Station Fire of 2009 allows for a comparison of cost of illness (COI) estimates with willingness to pay (WTP)...

  11. The modular modality frame model: continuous body state estimation and plausibility-weighted information fusion.

    PubMed

    Ehrenfeld, Stephan; Butz, Martin V

    2013-02-01

    Humans show admirable capabilities in movement planning and execution. They can perform complex tasks in various contexts, using the available sensory information very effectively. Body models and continuous body state estimations appear necessary to realize such capabilities. We introduce the Modular Modality Frame (MMF) model, which maintains a highly distributed, modularized body model and continuously updates modularized probabilistic body state estimations over time. Modularization is realized with respect to modality frames, that is, sensory modalities in particular frames of reference and with respect to particular body parts. We evaluate MMF performance on a simulated, nine degree of freedom arm in 3D space. The results show that MMF is able to maintain accurate body state estimations despite high sensor and motor noise. Moreover, by comparing the sensory information available in different modality frames, MMF can identify faulty sensory measurements on the fly. In the near future, applications to lightweight robot control should be pursued. Moreover, MMF may be enhanced with neural encodings by introducing neural population codes and learning techniques. Finally, more dexterous goal-directed behavior should be realized by exploiting the available redundant state representations.

  12. Anomaly Monitoring Method for Key Components of Satellite

    PubMed Central

    Fan, Linjun; Xiao, Weidong; Tang, Jun

    2014-01-01

    This paper presents a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of a failure analysis of lithium-ion batteries (LIBs), we divided LIB failures into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters for state estimation. Then, from the actual in-orbit telemetry data of these key parameters, we obtained the actual residual value (RX) and healthy residual value (RL) of the LIBs based on MSET state estimation, and from these residual values we detected anomalous states using SPRT-based anomaly detection. Lastly, we conducted an example application of AMM to LIBs and validated its feasibility and effectiveness by comparing its results with those of the threshold detection method (TDM). PMID:24587703
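
    The SPRT stage of such a scheme can be sketched as follows (Python/NumPy): given residuals between telemetry and a healthy-state estimate of the kind MSET would provide, a sequential log-likelihood-ratio test decides between a healthy and a degraded hypothesis. The residual model, thresholds, and numbers are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sprt_anomaly(residuals, sigma, mean_shift, alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test on state-estimation residuals.

    H0: residual ~ N(0, sigma^2) (healthy); H1: residual ~ N(mean_shift, sigma^2)
    (degraded). Returns 'anomaly', 'healthy', or 'undecided'.
    """
    upper = np.log((1 - beta) / alpha)     # accept H1 (anomaly)
    lower = np.log(beta / (1 - alpha))     # accept H0 (healthy)
    llr = 0.0
    for r in residuals:
        # log-likelihood ratio increment for a Gaussian mean-shift alternative
        llr += (mean_shift / sigma**2) * (r - mean_shift / 2.0)
        if llr >= upper:
            return "anomaly"
        if llr <= lower:
            return "healthy"
    return "undecided"

# Example: residuals between telemetry and an MSET-style healthy-state estimate.
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 0.05, 200)
degraded = rng.normal(0.12, 0.05, 200)
print(sprt_anomaly(healthy, sigma=0.05, mean_shift=0.1))   # expected: healthy
print(sprt_anomaly(degraded, sigma=0.05, mean_shift=0.1))  # expected: anomaly
```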

  13. Economic values of wilderness recreation and passive use: what we think we know at the beginning of the 21st century

    Treesearch

    John B. Loomis

    2000-01-01

    Two techniques are used to estimate the economic value of recreation and off-site passive use values of wilderness. Using an average value per recreation day ($39), the economic value of wilderness recreation is estimated to be $574 million annually. Generalizing the two Western passive use values studies we estimate values of Western wilderness in the lower 48 states...

  14. An integrated study of earth resources in the state of California using remote sensing techniques

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. A weighted stratified double sample design using hardcopy LANDSAT-1 and ground data was utilized in developmental studies for snow water content estimation. Study results gave a correlation coefficient of 0.80 between LANDSAT sample unit estimates of snow water content and ground subsamples. The allowable error for a basin snow water content estimate was given as 1.00 percent at the 99 percent confidence level with the same budget level utilized in conventional snow surveys. Several evapotranspiration estimation models were selected for efficient application at each level of data to be sampled. An area estimation procedure for impervious surface types of differing impermeability adjacent to stream channels was developed. This technique employs a double sample of 1:125,000 color infrared high-flight transparency data with ground or large scale photography.

  15. Correlation techniques to determine model form in robust nonlinear system realization/identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1991-01-01

    The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which the fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  16. VO2 and VCO2 variabilities through indirect calorimetry instrumentation.

    PubMed

    Cadena-Méndez, Miguel; Escalante-Ramírez, Boris; Azpiroz-Leehan, Joaquín; Infante-Vázquez, Oscar

    2013-01-01

    The aim of this paper is to understand how to measure the VO2 and VCO2 variabilities in indirect calorimetry (IC), since we believe they can explain the high variation in resting energy expenditure (REE) estimation. We propose that the variabilities should be measured separately from the VO2 and VCO2 averages to understand technological differences among metabolic monitors when they estimate the REE. To test this hypothesis, the mixing chamber (MC) and breath-by-breath (BbB) techniques were used to measure the VO2 and VCO2 averages and their variabilities. Variances and power-spectrum energies in the 0-0.5 Hertz band were measured to establish differences between the techniques in steady and non-steady state. A hybrid calorimeter combining both IC techniques studied a population of 15 volunteers who underwent the clino-orthostatic maneuver in order to produce the two physiological stages. The results showed that inter-individual VO2 and VCO2 variabilities measured as variances were negligible using the MC, while variabilities measured as spectral energies using the BbB increased by 71% and 56%, respectively (p < 0.05). Additionally, the energy analysis showed an unexpected cyclic rhythm at 0.025 Hertz only during the orthostatic stage, which is new physiological information not reported previously. The VO2 and VCO2 inter-individual averages increased by 63% and 39% with the MC (p < 0.05) and by 32% and 40% with the BbB (p < 0.1), respectively, without noticeable statistical differences between techniques. The conclusions are: (a) metabolic monitors should include both the MC and BbB techniques to correctly interpret the effect of steady- or non-steady-state variabilities on REE estimation; (b) the MC is the appropriate technique for computing averages, since it behaves as a low-pass filter that minimizes variances; (c) the BbB is the ideal technique for measuring variabilities, since it can work as a high-pass filter generating discrete time series suitable for spectral analysis; and (d) the new physiological information in the VO2 and VCO2 variabilities can help explain why metabolic monitors with dissimilar IC techniques give different REE estimates.

  17. Polarimetric Radar Observations of Forest State for Determination of Ecosystem Processes

    NASA Technical Reports Server (NTRS)

    Ulaby, Fawwaz T.; Dobson, M. Craig; Sharik, T.

    1996-01-01

    The objectives of this research are to test the hypothesis that ecologically significant forest state parameters may be estimated from SAR data, including above-ground biomass, plant water status, and near-surface soil moisture under certain forest conditions; to test these hypotheses in the northern hardwoods forest community and refine them if necessary; and to establish techniques for retrieving this information from orbital SARs such as SIR-C/X-SAR. This report summarizes (1) recent progress, (2) significant results, and (3) research plans concerning SIR-C/X-SAR research.

  18. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  19. New method for propagating the square root covariance matrix in triangular form. [using Kalman-Bucy filter

    NASA Technical Reports Server (NTRS)

    Choe, C. Y.; Tapley, B. D.

    1975-01-01

    A method proposed by Potter of applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well-adapted for use with the Carlson square root measurement algorithm.
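
    A minimal sketch of propagating a covariance square root in triangular form is given below (Python/NumPy). It re-triangularizes the stacked factor with a QR factorization rather than reproducing the specific algorithm of the report, so it should be read as a generic illustration of the idea; the constant-velocity example model is hypothetical.

```python
import numpy as np

def propagate_sqrt_covariance(L, Phi, G, Q):
    """Time-update a lower-triangular covariance square root L (P = L @ L.T).

    Builds the stacked factor [Phi @ L, G @ chol(Q)] and re-triangularizes it
    with a QR factorization, so the propagated covariance
    P+ = Phi P Phi.T + G Q G.T is returned directly in square-root form.
    """
    Q_sqrt = np.linalg.cholesky(Q)
    A = np.hstack((Phi @ L, G @ Q_sqrt))       # P+ = A @ A.T
    # QR of A.T gives A.T = Qr @ R, hence A @ A.T = R.T @ R with R upper triangular.
    _, R = np.linalg.qr(A.T)
    L_plus = R.T                               # lower-triangular square root of P+
    # Fix signs so the diagonal is non-negative (QR sign ambiguity).
    signs = np.sign(np.diag(L_plus))
    signs[signs == 0] = 1.0
    return L_plus * signs

# Example: two-state constant-velocity model.
dt = 1.0
Phi = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
Q = np.array([[0.01]])
L0 = np.linalg.cholesky(np.diag([1.0, 0.1]))
L1 = propagate_sqrt_covariance(L0, Phi, G, Q)
print(np.allclose(L1 @ L1.T, Phi @ (L0 @ L0.T) @ Phi.T + G @ Q @ G.T))  # True
```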

  20. The constrained discrete-time state-dependent Riccati equation technique for uncertain nonlinear systems

    NASA Astrophysics Data System (ADS)

    Chang, Insu

    The objective of the thesis is to introduce a relatively general nonlinear controller/estimator synthesis framework using a special type of the state-dependent Riccati equation technique. The continuous time state-dependent Riccati equation (SDRE) technique is extended to discrete-time under input and state constraints, yielding constrained (C) discrete-time (D) SDRE, referred to as CD-SDRE. For the latter, stability analysis and calculation of a region of attraction are carried out. The derivation of the D-SDRE under state-dependent weights is provided. Stability of the D-SDRE feedback system is established using Lyapunov stability approach. Receding horizon strategy is used to take into account the constraints on D-SDRE controller. Stability condition of the CD-SDRE controller is analyzed by using a switched system. The use of CD-SDRE scheme in the presence of constraints is then systematically demonstrated by applying this scheme to problems of spacecraft formation orbit reconfiguration under limited performance on thrusters. Simulation results demonstrate the efficacy and reliability of the proposed CD-SDRE. The CD-SDRE technique is further investigated in a case where there are uncertainties in nonlinear systems to be controlled. First, the system stability under each of the controllers in the robust CD-SDRE technique is separately established. The stability of the closed-loop system under the robust CD-SDRE controller is then proven based on the stability of each control system comprising switching configuration. A high fidelity dynamical model of spacecraft attitude motion in 3-dimensional space is derived with a partially filled fuel tank, assumed to have the first fuel slosh mode. The proposed robust CD-SDRE controller is then applied to the spacecraft attitude control system to stabilize its motion in the presence of uncertainties characterized by the first fuel slosh mode. The performance of the robust CD-SDRE technique is discussed. Subsequently, filtering techniques are investigated by using the D-SDRE technique. Detailed derivation of the D-SDRE-based filter (D-SDREF) is provided under the assumption of Gaussian noises and the stability condition of the error signal between the measured signal and the estimated signals is proven to be input-to-state stable. For the non-Gaussian distributed noises, we propose a filter by combining the D-SDREF and the particle filter (PF), named the combined D-SDRE/PF. Two algorithms for the filtering techniques are provided. Several filtering techniques are compared with challenging numerical examples to show the reliability and efficacy of the proposed D-SDREF and the combined D-SDRE/PF.

  1. State tobacco control expenditures and tax paid cigarette sales

    PubMed Central

    Tauras, John A.; Xu, Xin; Huang, Jidong; King, Brian; Lavinghouze, S. Rene; Sneegas, Karla S.; Chaloupka, Frank J.

    2018-01-01

    This research is the first nationally representative study to examine the relationship between actual state-level tobacco control spending in each of the 5 CDC’s Best Practices for Comprehensive Tobacco Control Program categories and cigarette sales. We employed several alternative two-way fixed-effects regression techniques to estimate the determinants of cigarette sales in the United States for the years 2008–2012. State spending on tobacco control was found to have a negative and significant impact on cigarette sales in all models that were estimated. Spending in the areas of cessation interventions, health communication interventions, and state and community interventions were found to have a negative impact on cigarette sales in all models that were estimated, whereas spending in the areas of surveillance and evaluation, and administration and management were found to have negative effects on cigarette sales in only some models. Our models predict that states that spend up to seven times their current levels could still see significant reductions in cigarette sales. The findings from this research could help inform further investments in state tobacco control programs. PMID:29652890

  2. Inefficiency of Signal Amplification by Post-selection

    NASA Astrophysics Data System (ADS)

    Tanaka, Saki; Yamamoto, Naoki

    Based on the two-state vector formalism, Aharonov, Albert and Vaidman found a measurement scheme in which the measured spin of a spin-1/2 particle can turn out to be 100 [1]. The measurement result is called the weak value, and this value depends on the pre- and post-selected states. The weak value becomes infinitely large when the post-selected state is orthogonal to the pre-selected state. Owing to this feature, weak measurement has been applied as an amplification technique. However, the success of the post-selection depends on luck, and this technique does not always work. We take into account the loss caused by post-selection and evaluate this amplification using quantum estimation theory. As a result, we obtain an inequality which means that post-selection does not improve estimation accuracy when the number of states is limited.

  3. Two self-test methods applied to an inertial system problem. [estimating gyroscope and accelerometer bias

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.; Deyst, J. J.; Crawford, B. S.

    1975-01-01

    The paper describes two self-test procedures applied to the problem of estimating the biases in accelerometers and gyroscopes on an inertial platform. The first technique is the weighted sum-squared residual (WSSR) test, with which accelerometer bias jumps are easily isolated, but gyro bias jumps are difficult to isolate. The WSSR method does not take full advantage of the knowledge of system dynamics. The other technique is a multiple hypothesis method developed by Buxbaum and Haddad (1969). It has the advantage of directly providing jump isolation information, but suffers from computational problems. It might be possible to use the WSSR to detect state jumps and then switch to the BH system for jump isolation and estimate compensation.

  4. A program to form a multidisciplinary data base and analysis for dynamic systems

    NASA Technical Reports Server (NTRS)

    Taylor, L. W.; Suit, W. T.; Mayo, M. H.

    1984-01-01

    Diverse sets of experimental data and analysis programs have been assembled for the purpose of facilitating research in systems identification, parameter estimation and state estimation techniques. The data base analysis programs are organized to make it easy to compare alternative approaches. Additional data and alternative forms of analysis will be included as they become available.

  5. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    PubMed

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding of the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, and proportioning new mixtures and for the quality assurance. Regression techniques are most widely used for prediction tasks where relationship between the independent variables and dependent (prediction) variable is identified. The accuracy of the regression techniques for prediction can be improved if clustering can be used along with regression. Clustering along with regression will ensure the more accurate curve fitting between the dependent and independent variables. In this work cluster regression technique is applied for estimating the compressive strength of the concrete and a novel state of the art is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression ensures less prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group the similar characteristics concrete data and then in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting compressive strength of concrete; also fuzzy clustering algorithm C-means performs better than K-means algorithm.
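
    The two-stage cluster-then-regress idea can be sketched as follows, assuming scikit-learn is available. For brevity the sketch uses K-means and ordinary least squares on synthetic data standing in for a concrete-mixture dataset; the paper's fuzzy C-means variant would replace the hard clustering step.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Synthetic stand-in for a concrete mixture dataset:
# features = [cement, water, age]; target = compressive strength (MPa).
X = rng.uniform([200, 140, 3], [450, 220, 180], size=(300, 3))
y = 0.12 * X[:, 0] - 0.15 * X[:, 1] + 8.0 * np.log(X[:, 2]) + rng.normal(0, 2, 300)

# Stage 1: group mixtures with similar characteristics.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_

# Stage 2: fit one regression model per cluster.
models = {k: LinearRegression().fit(X[labels == k], y[labels == k])
          for k in np.unique(labels)}

def predict_strength(x):
    """Route a new mixture to its nearest cluster and use that cluster's model."""
    k = int(kmeans.predict(x.reshape(1, -1))[0])
    return float(models[k].predict(x.reshape(1, -1))[0])

print(predict_strength(np.array([350.0, 180.0, 28.0])))
```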

  6. Estimating the Concrete Compressive Strength Using Hard Clustering and Fuzzy Clustering Based Regression Techniques

    PubMed Central

    Nagwani, Naresh Kumar; Deo, Shirish V.

    2014-01-01

    Understanding of the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, and proportioning new mixtures and for the quality assurance. Regression techniques are most widely used for prediction tasks where relationship between the independent variables and dependent (prediction) variable is identified. The accuracy of the regression techniques for prediction can be improved if clustering can be used along with regression. Clustering along with regression will ensure the more accurate curve fitting between the dependent and independent variables. In this work cluster regression technique is applied for estimating the compressive strength of the concrete and a novel state of the art is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression ensures less prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group the similar characteristics concrete data and then in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting compressive strength of concrete; also fuzzy clustering algorithm C-means performs better than K-means algorithm. PMID:25374939

  7. Finite-time output feedback control of uncertain switched systems via sliding mode design

    NASA Astrophysics Data System (ADS)

    Zhao, Haijuan; Niu, Yugang; Song, Jun

    2018-04-01

    The problem of sliding mode control (SMC) is investigated for a class of uncertain switched systems subject to unmeasurable states and an assigned finite (possibly short) time constraint. A key issue is how to ensure the finite-time boundedness (FTB) of the system state during the reaching phase and the sliding motion phase. To this end, a state observer is constructed to estimate the unmeasured states. Then, a state-estimate-based SMC law is designed such that the state trajectories can be driven onto the specified integral sliding surface during the assigned finite time interval. By means of a partitioning strategy, the corresponding FTB over the reaching and sliding motion phases is guaranteed and sufficient conditions are derived via the average dwell time technique. Finally, an illustrative example is given to demonstrate the proposed method.

  8. The application of LQR synthesis techniques to the turboshaft engine control problem. [Linear Quadratic Regulator

    NASA Technical Reports Server (NTRS)

    Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.

    1985-01-01

    A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern, articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman Filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. The main rotor tip speed state is not measurable, so a Kalman Filter of the rotor was used to estimate this state. The crossover of the system was increased to 10 rad/s compared to 2 rad/sec for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.
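
    A minimal illustration of the LQR gain computation used in such a design is sketched below with SciPy's Riccati solver. The two-state plant matrices are hypothetical placeholders (a speed state plus an appended fuel-flow integrator), not the engine/rotor model of the report.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: solve the algebraic Riccati equation and
    return the state-feedback gain K for u = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical two-state linearized plant (a speed state and an appended
# fuel-flow integrator state), purely for illustration.
A = np.array([[-2.0, 1.0],
              [ 1.0, 0.0]])
B = np.array([[1.0],
              [0.0]])
Q = np.diag([10.0, 1.0])     # penalize speed error and integrator state
R = np.array([[0.1]])        # penalize fuel-flow effort
K = lqr_gain(A, B, Q, R)
print("LQR gain:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```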

  9. Earth Observation System Flight Dynamics System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Tracewell, David

    2016-01-01

    This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: collection and calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.

  10. A new approach to non-invasive oxygenated mixed venous PCO2

    NASA Technical Reports Server (NTRS)

    Fisher, Joseph A.; Ansel, Clifford A.

    1986-01-01

    A clinically practical technique was developed to calculate mixed venous CO2 partial pressure for the calculation of cardiac output by the Fick technique. The Fick principle states that the cardiac output is equal to the CO2 production divided by the arterio-venous CO2 content difference of the pulmonary vessels. A review of the principles involved in the various techniques used to estimate venous CO2 partial pressure is presented.
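
    The Fick computation itself is a one-line calculation; the sketch below uses hypothetical, illustrative values rather than clinical data.

```python
def fick_cardiac_output(vco2_ml_min, venous_co2_content, arterial_co2_content):
    """Fick principle applied to CO2: cardiac output (L/min) equals CO2 production
    divided by the CO2 content difference between mixed venous and arterial blood
    (contents in mL CO2 per L of blood)."""
    return vco2_ml_min / (venous_co2_content - arterial_co2_content)

# Illustrative (hypothetical) values: VCO2 = 200 mL/min,
# mixed venous CO2 content = 540 mL/L, arterial CO2 content = 500 mL/L.
print(fick_cardiac_output(200.0, 540.0, 500.0), "L/min")  # -> 5.0 L/min
```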

  11. Wetland Hydrology

    EPA Science Inventory

    This chapter discusses the state of the science in wetland hydrology by touching upon the major hydraulic and hydrologic processes in these complex ecosystems, their measurement/estimation techniques, and modeling methods. It starts with the definition of wetlands, their benefit...

  12. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy

  13. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

    This paper presents an approach to estimating the point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of profiles normal to edges. In practice, however, it is often a great challenge to accurately localize edge profiles in an LR image, which leads to a poor estimate of the PSF of the lens that took the LR image. To estimate the PSF precisely, this paper proposes first estimating a 1-D PSF kernel from straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel from straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is evaluated on both natural and synthetic images. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.

  14. Assessing estimation techniques for missing plot observations in the U.S. forest inventory

    Treesearch

    Grant M. Domke; Christopher W. Woodall; Ronald E. McRoberts; James E. Smith; Mark A. Hatfield

    2012-01-01

    The U.S. Forest Service, Forest Inventory and Analysis Program made a transition from state-by-state periodic forest inventories--with reporting standards largely tailored to regional requirements--to a nationally consistent, annual inventory tailored to large-scale strategic requirements. Lack of measurements on all forest land during the periodic inventory, along...

  15. Eight Stars of Gold--The Story of Alaska's Flag. Middle School Activities (Grades 6-8).

    ERIC Educational Resources Information Center

    Alaska State Museum, Juneau.

    This activities booklet focuses on the story of Alaska's state flag. The booklet is for use in teaching middle school students. Each activity contains: background information, a summary and time estimate, Alaska state standards, a step-by-step technique for classroom implementation of the activity, assessment tips, materials and resources needed,…

  16. On experimental damage localization by SP2E: Application of H∞ estimation and oblique projections

    NASA Astrophysics Data System (ADS)

    Lenzen, Armin; Vollmering, Max

    2018-05-01

    In this article experimental damage localization based on H∞ estimation and state projection estimation error (SP2E) is studied. Based on an introduced difference process, a state space representation is derived for advantageous numerical solvability. Because real structural excitations are presumed to be unknown, a general input is applied therein, which allows synchronization and normalization. Furthermore, state projections are introduced to enhance damage identification. While first experiments to verify method SP2E have already been conducted and published, further laboratory results are analyzed here. Therefore, SP2E is used to experimentally localize stiffness degradations and mass alterations. Furthermore, the influence of projection techniques is analyzed. In summary, method SP2E is able to localize structural alterations, which has been observed by results of laboratory experiments.

  17. Some modifications of Newton's method for the determination of the steady-state response of nonlinear oscillatory circuits

    NASA Astrophysics Data System (ADS)

    Grosz, F. B., Jr.; Trick, T. N.

    1982-07-01

    It is proposed that nondominant states should be eliminated from the Newton algorithm in the steady-state analysis of nonlinear oscillatory systems. This technique not only improves convergence, but also reduces the size of the sensitivity matrix so that less computation is required for each iteration. One or more periods of integration should be performed after each periodic state estimation before the sensitivity computations are made for the next periodic state estimation. These extra periods of integration between Newton iterations are found to allow the fast states due to parasitic effects to settle, which enables the Newton algorithm to make a better prediction. In addition, the reliability of the algorithm is improved in high Q oscillator circuits by both local and global damping in which the amount of damping is proportional to the difference between the initial and final state values.
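
    The shooting-Newton idea for a periodic steady state, including an extra period of integration between iterations to let fast states settle, can be sketched as follows (Python/SciPy). A forced Duffing oscillator stands in for a periodically driven nonlinear circuit, and the finite-difference sensitivity matrix and all parameters are illustrative assumptions, not the modifications proposed in the record.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forced Duffing oscillator as a stand-in for a periodically driven nonlinear
# circuit; the steady state is a periodic orbit with the drive period T.
c, k, beta, F, w = 0.2, 1.0, 1.0, 0.5, 1.2
T = 2.0 * np.pi / w

def rhs(t, x):
    return [x[1], -c * x[1] - k * x[0] - beta * x[0]**3 + F * np.cos(w * t)]

def flow(x0, periods=1):
    """Integrate the state over an integer number of drive periods."""
    sol = solve_ivp(rhs, (0.0, periods * T), x0, rtol=1e-9, atol=1e-9)
    return sol.y[:, -1]

def newton_periodic(x0, iters=8, eps=1e-6):
    """Shooting Newton iteration for the periodic steady state:
    solve g(x0) = flow(x0) - x0 = 0, with the sensitivity matrix
    d(flow)/d(x0) approximated by finite differences."""
    x0 = np.array(x0, dtype=float)
    for _ in range(iters):
        phi0 = flow(x0)
        g = phi0 - x0
        J = np.empty((2, 2))
        for j in range(2):                      # finite-difference sensitivities
            dx = np.zeros(2)
            dx[j] = eps
            J[:, j] = (flow(x0 + dx) - phi0) / eps
        x0 = x0 + np.linalg.solve(np.eye(2) - J, g)
        x0 = flow(x0)                           # extra settling period between iterations
    return x0

x_star = newton_periodic([0.0, 0.0])
print(x_star, np.linalg.norm(flow(x_star) - x_star))   # residual should be small
```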

  18. Valuing morbidity from wildfire smoke exposure: a comparison of revealed and stated preference techniques

    USGS Publications Warehouse

    Richardson, Leslie; Loomis, John B.; Champ, Patricia A.

    2013-01-01

    Estimating the economic benefits of reduced health damages due to improvements in environmental quality continues to challenge economists. We review welfare measures associated with reduced wildfire smoke exposure, and a unique dataset from California’s Station Fire of 2009 allows for a comparison of cost of illness (COI) estimates with willingness to pay (WTP) measures. The WTP for one less symptom day is estimated to be $87 and $95, using the defensive behavior and contingent valuation methods, respectively. These WTP estimates are not statistically different but do differ from a $3 traditional daily COI estimate and $17 comprehensive daily COI estimate.

  19. Minimax estimation of qubit states with Bures risk

    NASA Astrophysics Data System (ADS)

    Acharya, Anirudh; Guţă, Mădălin

    2018-04-01

    The central problem of quantum statistics is to devise measurement schemes for the estimation of an unknown state, given an ensemble of n independent identically prepared systems. For locally quadratic loss functions, the risk of standard procedures has the usual scaling of 1/n. However, it has been noticed that for fidelity-based metrics such as the Bures distance, the risk of conventional (non-adaptive) qubit tomography schemes scales as 1/√n for states close to the boundary of the Bloch sphere. Several proposed estimators appear to improve this scaling, and our goal is to analyse the problem from the perspective of the maximum risk over all states. We propose qubit estimation strategies based on separate adaptive measurements, and on collective measurements, that achieve 1/n scalings for the maximum Bures risk. The estimator involving local measurements uses a fixed fraction of the available resource n to estimate the Bloch vector direction; the length of the Bloch vector is then estimated from the remaining copies by measuring in the estimated eigenbasis. The estimator based on collective measurements uses local asymptotic normality techniques, which allow us to derive upper and lower bounds on its maximum Bures risk. We also discuss how to construct a minimax optimal estimator in this setup. Finally, we consider quantum relative entropy and show that the risk of the estimator based on collective measurements achieves a rate O(n⁻¹ log n) under this loss function. Furthermore, we show that no estimator can achieve faster rates, in particular the ‘standard’ rate n⁻¹.

  20. Multimodel Kalman filtering for adaptive nonuniformity correction in infrared sensors.

    PubMed

    Pezoa, Jorge E; Hayat, Majeed M; Torres, Sergio N; Rahman, Md Saifur

    2006-06-01

    We present an adaptive technique for the estimation of nonuniformity parameters of infrared focal-plane arrays that is robust with respect to changes and uncertainties in scene and sensor characteristics. The proposed algorithm is based on using a bank of Kalman filters in parallel. Each filter independently estimates state variables comprising the gain and the bias matrices of the sensor, according to its own dynamic-model parameters. The supervising component of the algorithm then generates the final estimates of the state variables by forming a weighted superposition of all the estimates rendered by each Kalman filter. The weights are computed and updated iteratively, according to the a posteriori-likelihood principle. The performance of the estimator and its ability to compensate for fixed-pattern noise is tested using both simulated and real data obtained from two cameras operating in the mid- and long-wave infrared regime.
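
    The supervising weighted-superposition step can be illustrated with a toy bank of scalar Kalman filters (Python/NumPy): each filter assumes a different process-noise level, and their estimates are combined with weights updated from each filter's measurement likelihood. This is a simplified stand-in for the gain/bias matrix estimation of the paper, with illustrative parameters.

```python
import numpy as np

class ScalarKalman:
    """Scalar random-walk Kalman filter with its own process-noise model."""
    def __init__(self, q, r, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def step(self, z):
        self.p += self.q                       # predict
        s = self.p + self.r                    # innovation variance
        nu = z - self.x
        # Measurement likelihood under this filter's model (used as its weight).
        like = np.exp(-0.5 * nu**2 / s) / np.sqrt(2.0 * np.pi * s)
        k = self.p / s                         # update
        self.x += k * nu
        self.p *= (1.0 - k)
        return self.x, like

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.05, 300))          # slowly drifting parameter
meas = truth + rng.normal(0.0, 0.2, 300)

bank = [ScalarKalman(q=q, r=0.04) for q in (1e-4, 1e-3, 1e-2)]
weights = np.ones(len(bank)) / len(bank)

estimates = []
for z in meas:
    xs, likes = zip(*(f.step(z) for f in bank))
    weights = weights * np.array(likes)                 # a posteriori likelihood update
    weights /= weights.sum()
    estimates.append(float(np.dot(weights, xs)))        # weighted superposition

print("final weights:", np.round(weights, 3))
print("final estimate vs truth:", estimates[-1], truth[-1])
```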

  1. Measurement of soil carbon oxidation state and oxidative ratio by 13C nuclear magnetic resonance

    USGS Publications Warehouse

    Hockaday, W.C.; Masiello, C.A.; Randerson, J.T.; Smernik, R.J.; Baldock, J.A.; Chadwick, O.A.; Harden, J.W.

    2009-01-01

    The oxidative ratio (OR) of the net ecosystem carbon balance is the ratio of net O2 and CO2 fluxes resulting from photosynthesis, respiration, decomposition, and other lateral and vertical carbon flows. The OR of the terrestrial biosphere must be well characterized to accurately estimate the terrestrial CO2 sink using atmospheric measurements of changing O2 and CO2 levels. To estimate the OR of the terrestrial biosphere, measurements are needed of changes in the OR of aboveground and belowground carbon pools associated with decadal timescale disturbances (e.g., land use change and fire). The OR of aboveground pools can be measured using conventional approaches including elemental analysis. However, measuring the OR of soil carbon pools is technically challenging, and few soil OR data are available. In this paper we test three solid-state nuclear magnetic resonance (NMR) techniques for measuring soil OR, all based on measurements of the closely related parameter, organic carbon oxidation state (Cox). Two of the three techniques make use of a molecular mixing model which converts NMR spectra into concentrations of a standard suite of biological molecules of known Cox. The third technique assigns Cox values to each peak in the NMR spectrum. We assess the error associated with each technique using pure chemical compounds and plant biomass standards whose Cox and OR values can be directly measured by elemental analyses. The most accurate technique, direct polarization solid-state 13C NMR with the molecular mixing model, agrees with elemental analyses to ±0.036 Cox units (±0.009 OR units). Using this technique, we show a large natural variability in soil Cox and OR values. Soil Cox values have a mean of -0.26 and a range from -0.45 to 0.30, corresponding to OR values of 1.08 ± 0.06 and a range from 0.96 to 1.22. We also estimate the OR of the carbon flux from a boreal forest fire. Analysis of soils from nearby intact soil profiles implies that soil carbon losses associated with the fire had an OR of 1.091 (±0.003). Fire appears to be a major factor driving the soil C pool to higher oxidation states and lower OR values. Episodic fluxes caused by disturbances like fire may have substantially different ORs from ecosystem respiration fluxes and therefore should be better quantified to reduce uncertainties associated with our understanding of the global atmospheric carbon budget. Copyright 2009 by the American Geophysical Union.

  2. Dynamic state estimation assisted power system monitoring and protection

    NASA Astrophysics Data System (ADS)

    Cui, Yinan

    The advent of phasor measurement units (PMUs) has unlocked several novel methods to monitor, control, and protect bulk electric power systems. This thesis introduces the concept of "Dynamic State Estimation" (DSE), aided by PMUs, for wide-area monitoring and protection of power systems. Unlike traditional State Estimation where algebraic variables are estimated from system measurements, DSE refers to a process to estimate the dynamic states associated with synchronous generators. This thesis first establishes the viability of using particle filtering as a technique to perform DSE in power systems. The utility of DSE for protection and wide-area monitoring are then shown as potential novel applications. The work is presented as a collection of several journal and conference papers. In the first paper, we present a particle filtering approach to dynamically estimate the states of a synchronous generator in a multi-machine setting considering the excitation and prime mover control systems. The second paper proposes an improved out-of-step detection method for generators by means of angular difference. The generator's rotor angle is estimated with a particle filter-based dynamic state estimator and the angular separation is then calculated by combining the raw local phasor measurements with this estimate. The third paper introduces a particle filter-based dual estimation method for tracking the dynamic states of a synchronous generator. It considers the situation where the field voltage measurements are not readily available. The particle filter is modified to treat the field voltage as an unknown input which is sequentially estimated along with the other dynamic states. The fourth paper proposes a novel framework for event detection based on energy functions. The key idea is that any event in the system will leave a signature in WAMS data-sets. It is shown that signatures for four broad classes of disturbance events are buried in the components that constitute the energy function for the system. This establishes a direct correspondence (or mapping) between an event and certain component(s) of the energy function. The last paper considers the dynamic latency effect when the measurements and estimated dynamics are transmitted from remote ends to a centralized location through the networks.
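
    A minimal bootstrap particle filter of the kind used as the building block for such dynamic state estimation is sketched below (Python/NumPy). The pendulum-like model is a hypothetical stand-in for generator rotor dynamics, and the noise levels and tuning are illustrative, not the thesis' multi-machine implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Pendulum-like stand-in for rotor dynamics:
# delta' = omega, omega' = (Pm - Pmax*sin(delta) - D*omega) / M
M, D, Pm, Pmax, dt = 10.0, 1.0, 0.8, 1.5, 0.02

def propagate(state, q_std):
    """Euler step of the dynamics plus additive process noise."""
    delta, omega = state[:, 0], state[:, 1]
    d_new = delta + dt * omega
    w_new = omega + dt * (Pm - Pmax * np.sin(delta) - D * omega) / M
    noise = rng.normal(0.0, q_std, size=state.shape)
    return np.column_stack((d_new, w_new)) + noise

# Simulate "truth" and PMU-like noisy angle measurements.
steps, r_std = 400, 0.02
x_true = np.array([[0.4, 0.0]])
truth, meas = [], []
for _ in range(steps):
    x_true = propagate(x_true, q_std=1e-3)
    truth.append(x_true[0].copy())
    meas.append(x_true[0, 0] + rng.normal(0.0, r_std))

# Bootstrap particle filter: propagate, weight by likelihood, resample.
N = 2000
particles = rng.normal([0.0, 0.0], [0.5, 0.2], size=(N, 2))
estimates = []
for z in meas:
    particles = propagate(particles, q_std=5e-3)
    w = np.exp(-0.5 * ((z - particles[:, 0]) / r_std) ** 2)   # measurement likelihood
    w /= w.sum()
    estimates.append(w @ particles)                            # posterior mean
    idx = rng.choice(N, size=N, p=w)                           # multinomial resampling
    particles = particles[idx]

print("final estimate:", estimates[-1], "truth:", truth[-1])
```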

  3. Demonstration of universal parametric entangling gates on a multi-qubit lattice

    PubMed Central

    Reagor, Matthew; Osborn, Christopher B.; Tezak, Nikolas; Staley, Alexa; Prawiroatmodjo, Guenevere; Scheer, Michael; Alidoust, Nasser; Sete, Eyob A.; Didier, Nicolas; da Silva, Marcus P.; Acala, Ezer; Angeles, Joel; Bestwick, Andrew; Block, Maxwell; Bloom, Benjamin; Bradley, Adam; Bui, Catvu; Caldwell, Shane; Capelluto, Lauren; Chilcott, Rick; Cordova, Jeff; Crossman, Genya; Curtis, Michael; Deshpande, Saniya; El Bouayadi, Tristan; Girshovich, Daniel; Hong, Sabrina; Hudson, Alex; Karalekas, Peter; Kuang, Kat; Lenihan, Michael; Manenti, Riccardo; Manning, Thomas; Marshall, Jayss; Mohan, Yuvraj; O’Brien, William; Otterbach, Johannes; Papageorge, Alexander; Paquette, Jean-Philip; Pelstring, Michael; Polloreno, Anthony; Rawat, Vijay; Ryan, Colm A.; Renzas, Russ; Rubin, Nick; Russel, Damon; Rust, Michael; Scarabelli, Diego; Selvanayagam, Michael; Sinclair, Rodney; Smith, Robert; Suska, Mark; To, Ting-Wai; Vahidpour, Mehrnoosh; Vodrahalli, Nagesh; Whyland, Tyler; Yadav, Kamal; Zeng, William; Rigetti, Chad T.

    2018-01-01

    We show that parametric coupling techniques can be used to generate selective entangling interactions for multi-qubit processors. By inducing coherent population exchange between adjacent qubits under frequency modulation, we implement a universal gate set for a linear array of four superconducting qubits. An average process fidelity of ℱ = 93% is estimated for three two-qubit gates via quantum process tomography. We establish the suitability of these techniques for computation by preparing a four-qubit maximally entangled state and comparing the estimated state fidelity with the expected performance of the individual entangling gates. In addition, we prepare an eight-qubit register in all possible bitstring permutations and monitor the fidelity of a two-qubit gate across one pair of these qubits. Across all these permutations, an average fidelity of ℱ = 91.6 ± 2.6% is observed. These results thus offer a path to a scalable architecture with high selectivity and low cross-talk. PMID:29423443

  4. Problems associated with estimating ground water discharge and recharge from stream-discharge records

    USGS Publications Warehouse

    Halford, K.J.; Mayer, G.C.

    2000-01-01

    Ground water discharge and recharge frequently have been estimated with hydrograph-separation techniques, but the critical assumptions of the techniques have not been investigated. The critical assumptions are that the hydraulic characteristics of the contributing aquifer (recession index) can be estimated from stream-discharge records; that periods of exclusively ground water discharge can be reliably identified; and that stream-discharge peaks approximate the magnitude and timing of recharge events. The first assumption was tested by estimating the recession index from stream-discharge hydrographs, ground water hydrographs, and hydraulic diffusivity estimates from aquifer tests in basins throughout the eastern United States and Montana. The recession index frequently could not be estimated reliably from stream-discharge records alone because many of the estimates of the recession index were greater than 1000 days. The ratio of stream discharge during baseflow periods was two to 36 times greater than the maximum expected range of ground water discharge at 12 of the 13 field sites. The identification of the ground water component of stream-discharge records was ambiguous because drainage from bank-storage, wetlands, surface water bodies, soils, and snowpacks frequently exceeded ground water discharge and also decreased exponentially during recession periods. The timing and magnitude of recharge events could not be ascertained from stream-discharge records at any of the sites investigated because recharge events were not directly correlated with stream peaks. When used alone, the recession-curve-displacement method and other hydrograph-separation techniques are poor tools for estimating ground water discharge or recharge because the major assumptions of the methods are commonly and grossly violated. Multiple, alternative methods of estimating ground water discharge and recharge should be used because of the uncertainty associated with any one technique.

  5. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. Poisson distribution is said to model the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to the state-of-art methods.
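
    The mean-variance relationship exploited by such estimators can be sketched as follows (Python/NumPy): for Poisson noise the variance is proportional to the mean, so the gain is the slope of a local variance-versus-mean fit, and the Anscombe transform then serves as a variance-stabilization check. The synthetic image and the estimator below are illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic Poisson-corrupted "image": each row has a constant underlying flux,
# and the measured pixel value is gain * Poisson(flux).
gain_true = 2.5
fluxes = np.linspace(5.0, 200.0, 64)
image = gain_true * rng.poisson(lam=np.tile(fluxes[:, None], (1, 256)))

# For scaled Poisson noise, Var = gain * Mean, so the gain is the slope of the
# local variance vs. local mean relation (computed here row by row).
means = image.mean(axis=1)
variances = image.var(axis=1, ddof=1)
gain_est = np.sum(means * variances) / np.sum(means**2)   # least-squares slope through origin
print("estimated gain:", gain_est)

# Sanity check via variance stabilization: after the Anscombe transform
# 2*sqrt(x/gain + 3/8), every row should have variance close to 1.
stabilized = 2.0 * np.sqrt(image / gain_est + 3.0 / 8.0)
print("stabilized row variances (should be ~1):",
      stabilized.var(axis=1, ddof=1)[:5].round(2))
```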

  6. Design of an active helicopter control experiment at the Princeton Rotorcraft Dynamics Laboratory

    NASA Technical Reports Server (NTRS)

    Marraffa, Andrew M.; Mckillip, R. M., Jr.

    1989-01-01

    In an effort to develop an active control technique for reducing helicopter vibrations stemming from the main rotor system, a helicopter model was designed and tested at the Princeton Rotorcraft Dynamics Laboratory (PRDL). A description of this facility, including its latest data acquisition upgrade, are given. The design procedures for the test model and its Froude scaled rotor system are also discussed. The approach for performing active control is based on the idea that rotor states can be identified by instrumenting the rotor blades. Using this knowledge, Individual Blade Control (IBC) or Higher Harmonic Control (HHC) pitch input commands may be used to impact on rotor dynamics in such a way as to reduce rotor vibrations. Discussed here is an instrumentation configuration utilizing miniature accelerometers to measure and estimate first and second out-of-plane bending mode positions and velocities. To verify this technique, the model was tested, and resulting data were used to estimate rotor states as well as flap and bending coefficients, procedures for which are discussed. Overall results show that a cost- and time-effective method for building a useful test model for future active control experiments was developed. With some fine-tuning or slight adjustments in sensor configuration, prospects for obtaining good state estimates look promising.

  7. Water use in Georgia by county for 2005; and water-use trends, 1980-2005

    USGS Publications Warehouse

    Fanning, Julia L.; Trent, Victoria P.

    2009-01-01

    Consumptive water use was determined for each category of use and compiled for each county. Estimation techniques vary for each water-use category. While consumptive use varied for each county in 2005, from about 1 percent to nearly 100 percent of total withdrawals, consumptive-use estimates for the entire State totaled 1,310 Mgal/d, about 24 percent of total withdrawals.

  8. Improving Upon String Methods for Transition State Discovery.

    PubMed

    Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker

    2012-02-14

    Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.

  9. An experimental case study to estimate Pre-harvest Wheat Acreage/Production in Hilly and Plain region of Uttarakhand state: Challenges and solutions of problems by using satellite data

    NASA Astrophysics Data System (ADS)

    Uniyal, D.; Kimothi, M. M.; Bhagya, N.; Ram, R. D.; Patel, N. K.; Dhaundiya, V. K.

    2014-11-01

    Wheat is an economically important Rabi crop for the state, grown on around 26% of the total available agricultural area. Wheat productivity varies between the hilly and Tarai regions; it is lower in the hilly region because of terrace cultivation, traditional agricultural practices, small land holdings, variation in physiography, top-soil erosion, and the lack of a proper irrigation system. Pre-harvest acreage, yield, and production estimation of major crops has conventionally been done by the crop-cutting method, which is biased, inaccurate, and time consuming. Remote sensing data, with their multi-temporal and multi-spectral capabilities, have in recent years added a new dimension to crop discrimination and acreage/yield/production estimation. In view of this, the Uttarakhand Space Applications Centre (USAC), Dehradun, in collaboration with the Space Applications Centre (SAC), ISRO, Ahmedabad and the Uttarakhand State Agriculture Department, has developed techniques for crop discrimination and pre-harvest wheat acreage/yield/production estimation. In the first phase, five districts (Dehradun, Almora, Udham Singh Nagar, Pauri Garhwal and Haridwar) with distinct physiography, covering both hilly and plain regions, were selected for testing and verification of the techniques using IRS (Indian Remote Sensing Satellite) LISS-III and LISS-IV data for the 2008-09 Rabi season, and data for all 13 districts of Uttarakhand from 2009-14, along with ground data, were used for detailed analysis. Five methods were developed, namely NDVI (Normalized Difference Vegetation Index), supervised classification, spatial modeling, a masking-out method, and a program written in Visual Basic, using multi-temporal Rabi-season satellite data together with collateral and ground data. These methods were used for wheat discrimination and pre-harvest acreage estimation, and the results were compared with those of the Bureau of Estimation Statistics (BES). Of the five methods, the wheat areas estimated by spatial modeling and by the Visual Basic program were closest to the BES figures. In the hilly region, many fields fall within shadowed areas, making accurate estimation difficult; a frequency-distribution-curve method was therefore used, a frequency range was chosen to separate wheat pixels from other pixels, the corresponding regions were digitized, and good results were obtained. For yield estimation, an algorithm was developed using soil characteristics (texture, depth, drainage), temperature, rainfall, and historical yield data; production was then obtained by multiplying the estimated yield per hectare by the crop acreage. The deviations of the acreage estimates from BES were about 3.28%, 2.46%, 3.45%, 1.56%, 1.2%, and 1.6% (the official estimate for 2013-14 had not yet been declared by the State Agriculture Department), and the deviations of the production estimates were about 4.98%, 3.66%, 3.21%, 3.1%, NA, and 2.9%, for the years 2008-09, 2009-10, 2010-11, 2011-12, 2012-13, and 2013-14, respectively. The estimated data have been provided to the State Agriculture Department for their use. Forecasting production before harvest facilitates the formulation of workable marketing strategies and better export/import decisions for the crop, which helps improve the economic condition of the state. Yield estimation would also help the agriculture department assess the productivity of land for a specific crop, and pre-harvest wheat acreage/production estimation provides reliable and timely estimates that enable administrators and planners to take strategic decisions on import-export policy matters and trade negotiations.

  10. Isotopic Techniques for Assessment of Groundwater Discharge to the Coastal Ocean

    DTIC Science & Technology

    2002-09-30

    … of the groundwater tracer. This may then be divided by the estimated groundwater Ra concentration to derive a water flux. … residence times of coastal waters. If one assumes that the source of short-lived radium isotopes is groundwater with a constant isotopic composition … (William C. Burnett, Department of Oceanography, Florida State)

  11. Comparison of Irrigation Water Use Estimates Calculated from Remotely Sensed Irrigated Acres and State Reported Irrigated Acres in the Lake Altus Drainage Basin, Oklahoma and Texas, 2000 Growing Season

    USGS Publications Warehouse

    Masoner, J.R.; Mladinich, C.S.; Konduris, A.M.; Smith, S. Jerrod

    2003-01-01

    Increased demand for water in the Lake Altus drainage basin requires more accurate estimates of water use for irrigation. The U.S. Geological Survey, in cooperation with the U.S. Bureau of Reclamation, is investigating new techniques to improve water-use estimates for irrigation purposes in the Lake Altus drainage basin. Empirical estimates of reference evapotranspiration, crop evapotranspiration, and crop irrigation water requirements for nine major crops were calculated from September 1999 to October 2000 using a solar radiation-based evapotranspiration model. Estimates of irrigation water use were calculated using remotely sensed irrigated crop acres derived from Landsat 7 Enhanced Thematic Mapper Plus imagery and were compared with irrigation water-use estimates calculated from irrigated crop acres reported by the Oklahoma Water Resources Board and the Texas Water Development Board for the 2000 growing season. The techniques presented will help manage water resources in the Lake Altus drainage basin and may be transferable to other areas with similar water management needs. Irrigation water use calculated from the remotely sensed irrigated acres was estimated at 154,920 acre-feet, whereas irrigation water use calculated from state reported irrigated crop acres was 196,026 acre-feet, a 23 percent difference. The greatest difference in irrigation water use was in Carson County, Texas. Irrigation water use for Carson County, Texas, calculated from the remotely sensed irrigated acres was 58,555 acre-feet, whereas irrigation water use calculated from state reported irrigated acres was 138,180 acre-feet, an 81 percent difference. The second greatest difference in irrigation water use occurred in Beckham County, Oklahoma. Differences between the two irrigation water use estimates are due to differences between the irrigated crop acres derived from the mapping process and those reported by the Oklahoma Water Resources Board and Texas Water Development Board.

  12. Valuing improved wetland quality using choice modeling

    NASA Astrophysics Data System (ADS)

    Morrison, Mark; Bennett, Jeff; Blamey, Russell

    1999-09-01

    The main stated preference technique used for estimating environmental values is the contingent valuation method. In this paper the results of an application of an alternative technique, choice modeling, are reported. Choice modeling has been developed in the marketing and transport applications but has only been used in a handful of environmental applications, most of which have focused on use values. The case study presented here involves the estimation of the nonuse environmental values provided by the Macquarie Marshes, a major wetland in New South Wales, Australia. Estimates of the nonuse value the community places on preventing job losses are also presented. The reported models are robust, having high explanatory power and variables that are statistically significant and consistent with expectations. These results provide support for the hypothesis that choice modeling can be used to estimate nonuse values for both environmental and social consequences of resource use changes.

  13. Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking

    NASA Astrophysics Data System (ADS)

    Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.

    2009-08-01

    The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses the standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance in a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique for accounting for model switching in smoothing is key to improving the performance.
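
    The per-mode building block of such a smoother, a forward Kalman filter followed by a backward Rauch-Tung-Striebel pass, can be sketched as follows (Python/NumPy). This omits the IMM mixing and mode probabilities entirely; the nearly-constant-velocity example is illustrative.

```python
import numpy as np

def kalman_rts(zs, F, H, Q, R, x0, P0):
    """Forward Kalman filter followed by a backward Rauch-Tung-Striebel
    smoothing pass for one linear-Gaussian model (the per-mode building
    block of a multiple-model smoother)."""
    n = len(x0)
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    x, P = np.array(x0, float), np.array(P0, float)
    for z in zs:                                    # forward pass
        xp, Pp = F @ x, F @ P @ F.T + Q             # predict
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        x = xp + K @ (z - H @ xp)                   # update
        P = (np.eye(n) - K @ H) @ Pp
        xs_p.append(xp); Ps_p.append(Pp); xs_f.append(x); Ps_f.append(P)
    xs_s, Ps_s = [xs_f[-1]], [Ps_f[-1]]
    for k in range(len(zs) - 2, -1, -1):            # backward (smoothing) pass
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s.insert(0, xs_f[k] + C @ (xs_s[0] - xs_p[k + 1]))
        Ps_s.insert(0, Ps_f[k] + C @ (Ps_s[0] - Ps_p[k + 1]) @ C.T)
    return np.array(xs_s), np.array(Ps_s)

# Nearly-constant-velocity target with position-only measurements.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[1.0]])
rng = np.random.default_rng(5)
truth = np.array([[k * dt, 1.0] for k in range(50)])
zs = truth[:, :1] + rng.normal(0.0, 1.0, (50, 1))
xs_s, _ = kalman_rts(zs, F, H, Q, R, x0=[0.0, 0.0], P0=np.eye(2) * 10.0)
print(xs_s[-1], truth[-1])
```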

  14. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three dimensional computer model and a single two dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements or any apriori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier to noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.

  15. Modeling, estimation and identification methods for static shape determination of flexible structures. [for large space structure design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

    This paper outlines methods for modeling, identification and estimation for static determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum-likelihood that finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.

  16. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
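
    The parallel tempering idea mentioned above can be illustrated compactly. The sketch below runs Metropolis chains at several temperatures and occasionally proposes swaps between neighbouring chains; the bimodal target density, the temperature ladder, and the step size are all assumptions made for the example.

```python
import numpy as np

def log_target(x):
    """Unnormalized log-density of a bimodal example target (two separated Gaussians)."""
    return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

def parallel_tempering(n_steps=20000, temps=(1.0, 2.0, 4.0, 8.0), step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    betas = 1.0 / np.asarray(temps)
    x = rng.normal(size=len(temps))              # one walker per temperature
    samples = []
    for it in range(n_steps):
        # Metropolis move within each temperature
        for i, b in enumerate(betas):
            prop = x[i] + step * rng.normal()
            if np.log(rng.uniform()) < b * (log_target(prop) - log_target(x[i])):
                x[i] = prop
        # occasionally propose swaps between neighbouring temperatures
        if it % 10 == 0:
            for i in range(len(betas) - 1):
                dlog = (betas[i] - betas[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
                if np.log(rng.uniform()) < dlog:
                    x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[0])                     # keep only the beta = 1 chain
    return np.array(samples)

samples = parallel_tempering()
print("fraction of samples in the right-hand mode:", np.mean(samples > 0.0))
```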

  17. Streamflow characterization using functional data analysis of the Potomac River

    NASA Astrophysics Data System (ADS)

    Zelmanow, A.; Maslova, I.; Ticlavilca, A. M.; McKee, M.

    2013-12-01

    Flooding and droughts are extreme hydrological events that affect the United States economically and socially. The severity and unpredictability of flooding has caused billions of dollars in damage and the loss of lives in the eastern United States. In this context, there is an urgent need to build a firm scientific basis for adaptation by developing and applying new modeling techniques for accurate streamflow characterization and reliable hydrological forecasting. The goal of this analysis is to use numerical streamflow characteristics in order to classify, model, and estimate the likelihood of extreme events in the eastern United States, mainly the Potomac River. Functional data analysis techniques are used to study yearly streamflow patterns, with the extreme streamflow events characterized via functional principal component analysis. These methods are merged with more classical techniques such as cluster analysis, classification analysis, and time series modeling. The developed functional data analysis approach is used to model continuous streamflow hydrographs. The forecasting potential of this technique is explored by incorporating climate factors to produce a yearly streamflow outlook.

  18. Carrier Estimation Using Classic Spectral Estimation Techniques for the Proposed Demand Assignment Multiple Access Service

    NASA Technical Reports Server (NTRS)

    Scaife, Bradley James

    1999-01-01

    In any satellite communication, the Doppler shift associated with the satellite's position and velocity must be calculated in order to determine the carrier frequency. If the satellite state vector is unknown then some estimate must be formed of the Doppler-shifted carrier frequency. One elementary technique is to examine the signal spectrum and base the estimate on the dominant spectral component. If, however, the carrier is spread (as in most satellite communications) this technique may fail unless the chip rate-to-data rate ratio (processing gain) associated with the carrier is small. In this case, there may be enough spectral energy to allow peak detection against a noise background. In this thesis, we present a method to estimate the frequency (without knowledge of the Doppler shift) of a spread-spectrum carrier assuming a small processing gain and binary-phase shift keying (BPSK) modulation. Our method relies on an averaged discrete Fourier transform along with peak detection on spectral match filtered data. We provide theory and simulation results indicating the accuracy of this method. In addition, we will describe an all-digital hardware design based around a Motorola DSP56303 and high-speed A/D which implements this technique in real-time. The hardware design is to be used in NMSU's implementation of NASA's demand assignment, multiple access (DAMA) service.
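
    The core frequency estimate described above, an averaged discrete Fourier transform followed by peak detection, can be sketched as follows. The noisy tone stands in for a residual carrier component; the sample rate, frequency, and noise level are invented, and the spectral match filtering stage and the DSP56303 real-time implementation from the thesis are not reproduced.

```python
import numpy as np

def carrier_estimate(x, fs, seg_len=1024):
    """Average periodograms over segments, then pick the dominant frequency bin."""
    n_seg = len(x) // seg_len
    acc = np.zeros(seg_len)
    for k in range(n_seg):
        seg = x[k * seg_len:(k + 1) * seg_len]
        acc += np.abs(np.fft.fft(seg)) ** 2
    freqs = np.fft.fftfreq(seg_len, d=1.0 / fs)
    return freqs[np.argmax(acc)]

# Illustrative test signal: a tone in additive noise (parameters are made up).
fs, f0 = 1.0e6, 123.4e3                 # sample rate and true carrier frequency, Hz
t = np.arange(262144) / fs
rng = np.random.default_rng(1)
x = np.cos(2.0 * np.pi * f0 * t) + 0.5 * rng.normal(size=t.size)
print("estimated carrier (Hz):", abs(carrier_estimate(x, fs)))
```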

  19. The measurement of linear frequency drift in oscillators

    NASA Astrophysics Data System (ADS)

    Barnes, J. A.

    1985-04-01

    A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regressions techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
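
    Three of the drift estimators listed above can be written down directly. The simulated data below (a linear frequency drift plus white frequency noise) and the noise levels are assumptions made for the illustration; as the abstract cautions, the confidence intervals of any of these estimators depend on the underlying noise model.

```python
import numpy as np

rng = np.random.default_rng(2)
tau, n = 1.0, 1000                    # sampling interval (s) and number of samples
drift = 1e-13                         # true drift rate, fractional frequency per second
t = np.arange(n) * tau
y = drift * t + 1e-12 * rng.normal(size=n)   # fractional frequency with white FM noise

# (a) regress frequency on a straight line; the slope is the drift rate
drift_freq = np.polyfit(t, y, 1)[0]

# (b) regress phase (the integral of frequency) on a quadratic; drift = 2 * quadratic coefficient
phase = np.concatenate(([0.0], np.cumsum(y) * tau))
drift_phase = 2.0 * np.polyfit(np.arange(n + 1) * tau, phase, 2)[0]

# (c) simple mean of the first difference of frequency
drift_diff = np.mean(np.diff(y)) / tau

print(drift_freq, drift_phase, drift_diff)
```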

  20. Evaluation of calibration accuracy of magnetometer sensors of Aist small spacecraft

    NASA Astrophysics Data System (ADS)

    Sedelnikov, A. V.; Filippov, A. S.; Gorozhakina, A. S.

    2018-05-01

    This paper presents a technique for estimating the calibration accuracy of magnetometer sensors, using the Aist small spacecraft as an example. Measurements of the Earth's magnetic field during the orbital flight of a small spacecraft are used to estimate the parameters of its rotational motion around the center of mass and to generate primary information for the magnetic actuators of the orbital motion control system. Therefore, calibration of the magnetometer sensors at the ground test stage is essential for the successful execution of the flight program. The technique can be used at the stages of ground and flight tests of magnetic field measuring instruments.

  1. Assessing the Impact of Local Agency Traffic Safety Training Using Ethnographic Techniques

    ERIC Educational Resources Information Center

    Colling, Timothy K.

    2010-01-01

    Traffic crashes are a significant source of loss of life, personal injury and financial expense in the United States. In 2008 there were 37,261 people killed and an estimated 2,346,000 people injured nationwide in motor vehicle traffic crashes. State and federal agencies are beginning to focus traffic safety improvement effort on local agency…

  2. Theory and Techniques for Assessing the Demand and Supply of Outdoor Recreation in the United States

    Treesearch

    H. Ken Cordell; John C. Bergstrom

    1989-01-01

    As the central analysis for the 1989 Renewable Resources planning Act Assessment, a household market model covering 37 recreational activities was computed for the United States. Equilibrium consumption and costs were estimated, as were likely future changes in consumption and costs in response to expected demand growth and alternative development and access policies...

  3. Proof of Concept for an Approach to a Finer Resolution Inventory

    Treesearch

    Chris J. Cieszewski; Kim Iles; Roger C. Lowe; Michal Zasada

    2005-01-01

    This report presents a proof of concept for a statistical framework to develop a timely, accurate, and unbiased fiber supply assessment in the State of Georgia, U.S.A. The proposed approach is based on using various data sources and modeling techniques to calibrate satellite image-based statewide stand lists, which provide initial estimates for a State inventory on a...

  4. The relative density of forests in the United States

    Treesearch

    Christopher W. Woodall; Charles H. Perry; Patrick D. Miles

    2006-01-01

    A relative stand density assessment technique, using the mean specific gravity of all trees in a stand to predict its maximum stand density index (SDI) and subsequently its relative stand density (current SDI divided by maximum SDI), was used to estimate the relative density of forests across the United States using a national-scale forest inventory. Live tree biomass...
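
    As a rough illustration of the relative density calculation described above, the sketch below computes Reineke's stand density index from trees per acre and quadratic mean diameter and divides it by a maximum SDI. Predicting the maximum SDI from mean specific gravity, as the study does, is not reproduced here, and the example numbers are made up.

```python
def relative_density(tpa, qmd_in, max_sdi):
    """Relative stand density: Reineke SDI divided by a maximum SDI.

    SDI = TPA * (QMD / 10)**1.605 with QMD in inches; in the cited study the
    maximum SDI is predicted from mean specific gravity, which is not done here.
    """
    sdi = tpa * (qmd_in / 10.0) ** 1.605
    return sdi / max_sdi

# Made-up example stand: 300 trees per acre, 8-inch QMD, assumed maximum SDI of 450.
print(relative_density(tpa=300.0, qmd_in=8.0, max_sdi=450.0))
```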

  5. Evaluation of Methods Used for Estimating Selected Streamflow Statistics, and Flood Frequency and Magnitude, for Small Basins in North Coastal California

    USGS Publications Warehouse

    Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard

    2004-01-01

    Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking. When estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees, but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, done cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data provided by the 40 stations to generate flow statistics that could be compared with SWRCB estimates for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well, but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger. Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches, and estimated 2-year/24-hour rainfall intensity less than 3 inches.
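
    One of the two SWRCB techniques named above, drainage-basin area-ratio transfer, reduces to a one-line calculation. The sketch below shows the generic form; the exponent and the example flows are illustrative stand-ins, not the SWRCB's calibrated values.

```python
def area_ratio_transfer(q_gaged, area_gaged_mi2, area_ungaged_mi2, exponent=1.0):
    """Drainage-basin area-ratio transfer of a flow statistic to an ungaged site.

    Q_ungaged = Q_gaged * (A_ungaged / A_gaged) ** exponent
    The exponent is region-specific; 1.0 here is purely illustrative.
    """
    return q_gaged * (area_ungaged_mi2 / area_gaged_mi2) ** exponent

# e.g. a 2-year peak flow of 1200 ft^3/s at a 50 mi^2 gage, transferred to a 35 mi^2 basin
print(area_ratio_transfer(1200.0, 50.0, 35.0))
```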

  6. ERTS-1 data user investigation of wetlands ecology

    NASA Technical Reports Server (NTRS)

    Anderson, R. R. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. ERTS-1 imagery (enlarged to 1:250,000) is an excellent tool by which large area coastal marshland mapping may be undertaken. If states can sacrifice some accuracy (amount unknown at this time) in placing of boundary lines, the technique may be used to do the following: (1) estimate extent of man's impact on marshes by ditching and lagooning and accelerated successional trends; (2) place boundaries between wetland and upland and hence estimate amount of coastal marshland remaining in the state; (3) distinguish among relatively large zones of various plant species including high and low growth S. alterniflora, J. roemerianus, and S. cynosuroides; and (4) estimate marsh plant species productivity when ground based information is available.

  7. Estimation of steady-state leakage current in polycrystalline PZT thin films

    NASA Astrophysics Data System (ADS)

    Podgorny, Yury; Vorotilov, Konstantin; Sigov, Alexander

    2016-09-01

    Estimation of the steady state (or "true") leakage current Js in polycrystalline ferroelectric PZT films with the use of the voltage-step technique is discussed. Curie-von Schweidler (CvS) and sum-of-exponents (Σexp) models are studied for fitting the current-time J(t) data. The Σexp model (a sum of three or two exponents) gives better fitting characteristics and provides good accuracy of Js estimation at reduced measurement time, thus making it possible to avoid film degradation, whereas the CvS model is very sensitive to both the start and finish time points and in many cases gives incorrect results. The results suggest the existence of low-frequency relaxation processes in PZT films with characteristic durations of tens and hundreds of seconds.
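
    The two competing fits described above can be reproduced with a generic least-squares routine. The relaxation data, time constants, and noise level below are invented for the illustration and are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def curie_von_schweidler(t, j_s, a, n):
    # J(t) = Js + A * t^(-n)
    return j_s + a * t ** (-n)

def sum_of_two_exponentials(t, j_s, a1, tau1, a2, tau2):
    # J(t) = Js + A1*exp(-t/tau1) + A2*exp(-t/tau2)
    return j_s + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Invented relaxation data: time in seconds, current density in nA/cm^2.
t = np.linspace(1.0, 300.0, 300)
rng = np.random.default_rng(3)
j = 10.0 + 50.0 * np.exp(-t / 20.0) + 20.0 * np.exp(-t / 120.0) + 0.1 * rng.normal(size=t.size)

p_cvs, _ = curve_fit(curie_von_schweidler, t, j, p0=[10.0, 50.0, 0.5], maxfev=20000)
p_exp, _ = curve_fit(sum_of_two_exponentials, t, j,
                     p0=[10.0, 10.0, 10.0, 10.0, 100.0], maxfev=20000)
print("CvS   Js estimate:", p_cvs[0])
print("2-exp Js estimate:", p_exp[0])
```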

  8. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India—An application of small area estimation techniques

    PubMed Central

    Aditya, Kaustav; Sud, U. C.

    2018-01-01

    Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic area such as national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011–12 of NSSO and the Population Census 2011. The results show that the district level estimates generated by SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable. PMID:29879202
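
    The general idea behind such small area estimators, borrowing strength from a model when the direct survey estimate is noisy, can be illustrated with a toy area-level composite (shrinkage) estimator. The weights, variances, and poverty figures below are invented, and the paper's actual model linking the NSSO survey with the Population Census is considerably richer.

```python
import numpy as np

def composite_estimate(direct, var_direct, synthetic, model_var):
    """Shrink noisy direct survey estimates toward synthetic (model-based) estimates.

    gamma gives more weight to the direct estimate when its sampling variance is
    small, in the spirit of area-level (Fay-Herriot type) small area estimators.
    """
    gamma = model_var / (model_var + var_direct)
    return gamma * direct + (1.0 - gamma) * synthetic

# Invented district-level poverty incidence figures.
direct     = np.array([0.42, 0.18, 0.55, 0.30])      # direct survey estimates
var_direct = np.array([0.020, 0.001, 0.030, 0.004])  # their sampling variances
synthetic  = np.array([0.35, 0.20, 0.45, 0.28])      # regression-synthetic estimates
print(composite_estimate(direct, var_direct, synthetic, model_var=0.005))
```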

  9. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India-An application of small area estimation techniques.

    PubMed

    Chandra, Hukum; Aditya, Kaustav; Sud, U C

    2018-01-01

    Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic area such as national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011-12 of NSSO and the Population Census 2011. The results show that the district level estimates generated by SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable.

  10. Hyper-X Post-Flight Trajectory Reconstruction

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Tartabini, Paul V.; Blanchard, Robert C.; Kirsch, Michael; Toniolo, Matthew D.

    2004-01-01

    This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated in order to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from flight data.

  11. Entropy-based adaptive attitude estimation

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.

    2018-03-01

    Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteed assurance of positive definiteness for the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms of the extended Kalman filter and cubature Kalman filter for attitude estimation of a low earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of comprehensive sensitivity analysis on the system and environmental parameters by using extensive independent Monte Carlo simulations.

  12. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
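
    A common windowed, innovation-based form of this idea is sketched below: the sample covariance of recent measurement residuals, minus the predicted measurement covariance, gives a running estimate of the measurement noise. This is a generic approximation offered for illustration, not the specific sequential algorithm developed in the paper, and the window length is an assumption.

```python
import numpy as np
from collections import deque

class InnovationRTuner:
    """Windowed estimate of the measurement-noise covariance R from filter innovations.

    R_hat = mean(nu nu^T over the window) - H P_pred H^T, projected back onto the
    positive semi-definite cone. The window length is an assumption for this sketch.
    """
    def __init__(self, window=50):
        self.buf = deque(maxlen=window)

    def update(self, innovation, H, P_pred):
        self.buf.append(np.outer(innovation, innovation))
        c = np.mean(self.buf, axis=0) - H @ P_pred @ H.T
        c = 0.5 * (c + c.T)                        # symmetrize
        w, v = np.linalg.eigh(c)
        return v @ np.diag(np.clip(w, 1e-12, None)) @ v.T

# Usage inside a Kalman filter loop, with innovation nu = z - H @ x_pred:
#   tuner = InnovationRTuner(window=50)
#   R_hat = tuner.update(nu, H, P_pred)
```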

  13. The National Flood Frequency Program, version 3 : a computer program for estimating magnitude and frequency of floods for ungaged sites

    USGS Publications Warehouse

    Ries, Kernell G.; Crouse, Michele Y.

    2002-01-01

    For many years, the U.S. Geological Survey (USGS) has been developing regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, these equations have been developed on a Statewide or metropolitan-area basis as part of cooperative study programs with specific State Departments of Transportation. In 1994, the USGS released a computer program titled the National Flood Frequency Program (NFF), which compiled all the USGS available regression equations for estimating the magnitude and frequency of floods in the United States and Puerto Rico. NFF was developed in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency. Since the initial release of NFF, the USGS has produced new equations for many areas of the Nation. A new version of NFF has been developed that incorporates these new equations and provides additional functionality and ease of use. NFF version 3 provides regression-equation estimates of flood-peak discharges for unregulated rural and urban watersheds, flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals. The Program also provides weighting techniques to improve estimates of flood-peak discharges for gaging stations and ungaged sites. The information provided by NFF should be useful to engineers and hydrologists for planning and design applications. This report describes the flood-regionalization techniques used in NFF and provides guidance on the applicability and limitations of the techniques. The NFF software and the documentation for the regression equations included in NFF are available at http://water.usgs.gov/software/nff.html.

  14. Online Kinematic and Dynamic-State Estimation for Constrained Multibody Systems Based on IMUs

    PubMed Central

    Torres-Moreno, José Luis; Blanco-Claraco, José Luis; Giménez-Fernández, Antonio; Sanjurjo, Emilio; Naya, Miguel Ángel

    2016-01-01

    This article addresses the problems of online estimations of kinematic and dynamic states of a mechanism from a sequence of noisy measurements. In particular, we focus on a planar four-bar linkage equipped with inertial measurement units (IMUs). Firstly, we describe how the position, velocity, and acceleration of all parts of the mechanism can be derived from IMU signals by means of multibody kinematics. Next, we propose the novel idea of integrating the generic multibody dynamic equations into two variants of Kalman filtering, i.e., the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), in a way that enables us to handle closed-loop, constrained mechanisms, whose state space variables are not independent and would normally prevent the direct use of such estimators. The proposal in this work is to apply those estimators over the manifolds of allowed positions and velocities, by means of estimating a subset of independent coordinates only. The proposed techniques are experimentally validated on a testbed equipped with encoders as a means of establishing the ground-truth. Estimators are run online in real-time, a feature not matched by any previous procedure of those reported in the literature on multibody dynamics. PMID:26959027

  15. Variational optical flow estimation based on stick tensor voting.

    PubMed

    Rashwan, Hatem A; Garcia, Miguel A; Puig, Domenec

    2013-07-01

    Variational optical flow techniques allow the estimation of flow fields from spatio-temporal derivatives. They are based on minimizing a functional that contains a data term and a regularization term. Recently, numerous approaches have been presented for improving the accuracy of the estimated flow fields. Among them, tensor voting has been shown to be particularly effective in the preservation of flow discontinuities. This paper presents an adaptation of the data term by using anisotropic stick tensor voting in order to gain robustness against noise and outliers with significantly lower computational cost than (full) tensor voting. In addition, an anisotropic complementary smoothness term depending on directional information estimated through stick tensor voting is utilized in order to preserve discontinuity capabilities of the estimated flow fields. Finally, a weighted non-local term that depends on both the estimated directional information and the occlusion state of pixels is integrated during the optimization process in order to denoise the final flow field. The proposed approach yields state-of-the-art results on the Middlebury benchmark.

  16. A Full Dynamic Compound Inverse Method for output-only element-level system identification and input estimation from earthquake response signals

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2016-08-01

    This paper proposes a new output-only element-level system identification and input estimation technique, towards the simultaneous identification of modal parameters, input excitation time history and structural features at the element-level by adopting earthquake-induced structural response signals. The method, named Full Dynamic Compound Inverse Method (FDCIM), releases strong assumptions of earlier element-level techniques, by working with a two-stage iterative algorithm. Jointly, a Statistical Average technique, a modification process and a parameter projection strategy are adopted at each stage to achieve stronger convergence for the identified estimates. The proposed method works in a deterministic way and is completely developed in State-Space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, also with noise-corrupted cases. The achieved results provide a necessary condition to demonstrate the effectiveness of the proposed identification method.

  17. A low tritium hydride bed inventory estimation technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, J.E.; Shanahan, K.L.; Baker, R.A.

    2015-03-15

    Low tritium hydride beds were developed and deployed into tritium service at the Savannah River Site. Process beds to be used for low concentration tritium gas were not fitted with instrumentation to perform the steady-state, flowing gas calorimetric inventory measurement method. Low tritium beds contain less than the detection limit of the In-Bed Accountability (IBA) technique used for tritium inventory. This paper describes two techniques for estimating tritium content and uncertainty for low tritium content beds to be used in the facility's physical inventory (PI). PIs are performed periodically to assess the quantity of nuclear material used in a facility. The first approach (mid-point approximation method, MPA) assumes the bed is half-full and uses a gas composition measurement to estimate the tritium inventory and uncertainty. The second approach utilizes the bed's hydride material pressure-composition-temperature (PCT) properties and a gas composition measurement to reduce the uncertainty in the calculated bed inventory.

  18. Efficient data assimilation algorithm for bathymetry application

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, H.; Lee, J. H.; Farthing, M.; Hesser, T.; Kitanidis, P. K.; Darve, E. F.

    2017-12-01

    Information on the evolving state of the nearshore zone bathymetry is crucial to shoreline management, recreational safety, and naval operations. The high cost and complex logistics of using ship-based surveys for bathymetry estimation have encouraged the use of remote sensing techniques. Data assimilation methods combine the remote sensing data and nearshore hydrodynamic models to estimate the unknown bathymetry and the corresponding uncertainties. In particular, several recent efforts have combined Kalman filter-based techniques such as ensemble-based Kalman filters with indirect video-based observations to address the bathymetry inversion problem. However, these methods often suffer from ensemble collapse and uncertainty underestimation. Here, the Compressed State Kalman Filter (CSKF) method is used to estimate the bathymetry based on observed wave celerity. In order to demonstrate the accuracy and robustness of the CSKF method, we consider twin tests with synthetic observations of wave celerity, while the bathymetry profiles are chosen based on surveys taken by the U.S. Army Corps of Engineers Field Research Facility (FRF) in Duck, NC. The first test case is a bathymetry estimation problem for a spatially smooth and temporally constant bathymetry profile. The second test case is a bathymetry estimation problem for a bathymetry evolving over time from a smooth to a non-smooth profile. For both problems, we compare the results of CSKF with those obtained by the local ensemble transform Kalman filter (LETKF), which is a popular ensemble-based Kalman filter method.

  19. Self-rated health: small area large area comparisons amongst older adults at the state, district and sub-district level in India.

    PubMed

    Hirve, Siddhivinayak; Vounatsou, Penelope; Juvekar, Sanjay; Blomstedt, Yulia; Wall, Stig; Chatterji, Somnath; Ng, Nawi

    2014-03-01

    We compared prevalence estimates of self-rated health (SRH) derived indirectly using four different small area estimation methods for the Vadu (small) area from the national Study on Global AGEing (SAGE) survey with estimates derived directly from the Vadu SAGE survey. The indirect synthetic estimate for Vadu was 24% whereas the model based estimates were 45.6% and 45.7% with smaller prediction errors and comparable to the direct survey estimate of 50%. The model based techniques were better suited to estimate the prevalence of SRH than the indirect synthetic method. We conclude that a simplified mixed effects regression model can produce valid small area estimates of SRH. © 2013 Published by Elsevier Ltd.

  20. Techniques for Estimating the Magnitude and Frequency of Peak Flows on Small Streams in Minnesota Based on Data through Water Year 2005

    USGS Publications Warehouse

    Lorenz, David L.; Sanocki, Chris A.; Kocian, Matthew J.

    2010-01-01

    Knowledge of the peak flow of floods of a given recurrence interval is essential for regulation and planning of water resources and for design of bridges, culverts, and dams along Minnesota's rivers and streams. Statistical techniques are needed to estimate peak flow at ungaged sites because long-term streamflow records are available at relatively few places. Because of the need to have up-to-date peak-flow frequency information in order to estimate peak flows at ungaged sites, the U.S. Geological Survey (USGS) conducted a peak-flow frequency study in cooperation with the Minnesota Department of Transportation and the Minnesota Pollution Control Agency. Estimates of peak-flow magnitudes for 1.5-, 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals are presented for 330 streamflow-gaging stations in Minnesota and adjacent areas in Iowa and South Dakota based on data through water year 2005. The peak-flow frequency information was subsequently used in regression analyses to develop equations relating peak flows for selected recurrence intervals to various basin and climatic characteristics. Two statistically derived techniques, regional regression equations and region of influence regression, can be used to estimate peak flow on ungaged streams smaller than 3,000 square miles in Minnesota. Regional regression equations were developed for selected recurrence intervals in each of six regions in Minnesota: A (northwestern), B (north central and east central), C (northeastern), D (west central and south central), E (southwestern), and F (southeastern). The regression equations can be used to estimate peak flows at ungaged sites. The region of influence regression technique dynamically selects streamflow-gaging stations with characteristics similar to a site of interest. Thus, the region of influence regression technique allows use of a potentially unique set of gaging stations for estimating peak flow at each site of interest. Two methods of selecting streamflow-gaging stations, similarity and proximity, can be used for the region of influence regression technique. The regional regression equation technique is the preferred technique for estimating peak flow at ungaged sites in all six regions. The region of influence regression technique is not appropriate for regions C, E, and F because the interrelations of some characteristics of those regions do not agree with the interrelations throughout the rest of the State. Both the similarity and proximity methods for the region of influence technique can be used in the other regions (A, B, and D) to provide additional estimates of peak flow. The peak-flow-frequency estimates and basin characteristics for selected streamflow-gaging stations and regional peak-flow regression equations are included in this report.
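
    Regional regression equations of the kind described above are typically fit as log-linear models of peak flow on basin characteristics. The sketch below fits such a model by ordinary least squares; the station data, the choice of drainage area and precipitation as predictors, and the resulting coefficients are all illustrative, not the report's published equations.

```python
import numpy as np

# Invented station data: drainage area (mi^2), mean annual precipitation (in), Q100 (ft^3/s).
area   = np.array([12.0, 55.0, 130.0, 300.0, 850.0, 2100.0])
precip = np.array([24.0, 28.0, 31.0, 26.0, 33.0, 29.0])
q100   = np.array([900.0, 2600.0, 5200.0, 8800.0, 19000.0, 36000.0])

# Fit log10(Q100) = b0 + b1*log10(A) + b2*log10(P) by ordinary least squares.
X = np.column_stack([np.ones(area.size), np.log10(area), np.log10(precip)])
b, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)

def q100_estimate(area_mi2, precip_in):
    """Apply the fitted regional regression equation at an ungaged site."""
    return 10.0 ** (b[0] + b[1] * np.log10(area_mi2) + b[2] * np.log10(precip_in))

print(q100_estimate(200.0, 30.0))
```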

  1. Reconciling Estimates of Cell Proliferation from Stable Isotope Labeling Experiments

    PubMed Central

    Drylewicz, Julia; Elemans, Marjet; Zhang, Yan; Kelly, Elizabeth; Reljic, Rajko; Tesselaar, Kiki; de Boer, Rob J.; Macallan, Derek C.; Borghans, José A. M.; Asquith, Becca

    2015-01-01

    Stable isotope labeling is the state of the art technique for in vivo quantification of lymphocyte kinetics in humans. It has been central to a number of seminal studies, particularly in the context of HIV-1 and leukemia. However, there is a significant discrepancy between lymphocyte proliferation rates estimated in different studies. Notably, deuterated 2H2-glucose (D2-glucose) labeling studies consistently yield higher estimates of proliferation than deuterated water (D2O) labeling studies. This hampers our understanding of immune function and undermines our confidence in this important technique. Whether these differences are caused by fundamental biochemical differences between the two compounds and/or by methodological differences in the studies is unknown. D2-glucose and D2O labeling experiments have never been performed by the same group under the same experimental conditions; consequently a direct comparison of these two techniques has not been possible. We sought to address this problem. We performed both in vitro and murine in vivo labeling experiments using identical protocols with both D2-glucose and D2O. This showed that intrinsic differences between the two compounds do not cause differences in the proliferation rate estimates, but that estimates made using D2-glucose in vivo were susceptible to difficulties in normalization due to highly variable blood glucose enrichment. Analysis of three published human studies made using D2-glucose and D2O confirmed this problem, particularly in the case of short term D2-glucose labeling. Correcting for these inaccuracies in normalization decreased proliferation rate estimates made using D2-glucose and slightly increased estimates made using D2O; thus bringing the estimates from the two methods significantly closer and highlighting the importance of reliable normalization when using this technique. PMID:26437372

  2. Data acquisition and path selection decision making for an autonomous roving vehicle

    NASA Technical Reports Server (NTRS)

    Frederick, D. K.; Shen, C. N.; Yerazunis, S. W.

    1976-01-01

    Problems related to the guidance of an autonomous rover for unmanned planetary exploration were investigated. Topics included in these studies were: simulation on an interactive graphics computer system of the Rapid Estimation Technique for detection of discrete obstacles; incorporation of a simultaneous Bayesian estimate of states and inputs in the Rapid Estimation Scheme; development of methods for estimating actual laser rangefinder errors and their application to data provided by the Jet Propulsion Laboratory; and modification of a path selection system simulation computer code for evaluation of a hazard detection system based on laser rangefinder data.

  3. Compatibility check of measured aircraft responses using kinematic equations and extended Kalman filter

    NASA Technical Reports Server (NTRS)

    Klein, V.; Schiess, J. R.

    1977-01-01

    An extended Kalman filter smoother and a fixed point smoother were used for estimation of the state variables in the six degree of freedom kinematic equations relating measured aircraft responses and for estimation of unknown constant bias and scale factor errors in measured data. The computing algorithm includes an analysis of residuals which can improve the filter performance and provide estimates of measurement noise characteristics for some aircraft output variables. The technique developed was demonstrated using simulated and real flight test data. Improved accuracy of measured data was obtained when the data were corrected for estimated bias errors.

  4. Real-Time Impact Visualization Inspection of Aerospace Composite Structures with Distributed Sensors.

    PubMed

    Si, Liang; Baier, Horst

    2015-07-08

    For the future design of smart aerospace structures, the development and application of a reliable, real-time and automatic monitoring and diagnostic technique is essential. Thus, with distributed sensor networks, a real-time automatic structural health monitoring (SHM) technique is designed and investigated to monitor and predict the locations and force magnitudes of unforeseen foreign impacts on composite structures and to estimate in real time mode the structural state when impacts occur. The proposed smart impact visualization inspection (IVI) technique mainly consists of five functional modules, which are the signal data preprocessing (SDP), the forward model generator (FMG), the impact positioning calculator (IPC), the inverse model operator (IMO) and structural state estimator (SSE). With regard to the verification of the practicality of the proposed IVI technique, various structure configurations are considered, which are a normal CFRP panel and another CFRP panel with "orange peel" surfaces and a cutout hole. Additionally, since robustness against several background disturbances is also an essential criterion for practical engineering demands, investigations and experimental tests are carried out under random vibration interfering noise (RVIN) conditions. The accuracy of the predictions for unknown impact events on composite structures using the IVI technique is validated under various structure configurations and under changing environmental conditions. The evaluated errors all fall well within a satisfactory limit range. Furthermore, it is concluded that the IVI technique is applicable for impact monitoring, diagnosis and assessment of aerospace composite structures in complex practical engineering environments.

  5. Real-Time Impact Visualization Inspection of Aerospace Composite Structures with Distributed Sensors

    PubMed Central

    Si, Liang; Baier, Horst

    2015-01-01

    For the future design of smart aerospace structures, the development and application of a reliable, real-time and automatic monitoring and diagnostic technique is essential. Thus, with distributed sensor networks, a real-time automatic structural health monitoring (SHM) technique is designed and investigated to monitor and predict the locations and force magnitudes of unforeseen foreign impacts on composite structures and to estimate in real time mode the structural state when impacts occur. The proposed smart impact visualization inspection (IVI) technique mainly consists of five functional modules, which are the signal data preprocessing (SDP), the forward model generator (FMG), the impact positioning calculator (IPC), the inverse model operator (IMO) and structural state estimator (SSE). With regard to the verification of the practicality of the proposed IVI technique, various structure configurations are considered, which are a normal CFRP panel and another CFRP panel with “orange peel” surfaces and a cutout hole. Additionally, since robustness against several background disturbances is also an essential criterion for practical engineering demands, investigations and experimental tests are carried out under random vibration interfering noise (RVIN) conditions. The accuracy of the predictions for unknown impact events on composite structures using the IVI technique is validated under various structure configurations and under changing environmental conditions. The evaluated errors all fall well within a satisfactory limit range. Furthermore, it is concluded that the IVI technique is applicable for impact monitoring, diagnosis and assessment of aerospace composite structures in complex practical engineering environments. PMID:26184196

  6. Ozone data assimilation with GEOS-Chem: a comparison between 3-D-Var, 4-D-Var, and suboptimal Kalman filter approaches

    NASA Astrophysics Data System (ADS)

    Singh, K.; Sandu, A.; Bowman, K. W.; Parrington, M.; Jones, D. B. A.; Lee, M.

    2011-08-01

    Chemistry transport models determine the evolving chemical state of the atmosphere by solving the fundamental equations that govern physical and chemical transformations subject to initial conditions of the atmospheric state and surface boundary conditions, e.g., surface emissions. Data assimilation techniques synthesize model predictions with measurements in a rigorous mathematical framework that provides observational constraints on these conditions. Two families of data assimilation methods are currently widely used: variational and Kalman filter (KF). The variational approach is based on control theory and formulates data assimilation as a minimization problem of a cost functional that measures the model-observations mismatch. The Kalman filter approach is rooted in statistical estimation theory and provides the analysis covariance together with the best state estimate. Suboptimal Kalman filters employ different approximations of the covariances in order to make the computations feasible with large models. Each family of methods has both merits and drawbacks. This paper compares several data assimilation methods used for global chemical data assimilation. Specifically, we evaluate data assimilation approaches for improving estimates of the summertime global tropospheric ozone distribution in August 2006 based on ozone observations from the NASA Tropospheric Emission Spectrometer and the GEOS-Chem chemistry transport model. The resulting analyses are compared against independent ozonesonde measurements to assess the effectiveness of each assimilation method. All assimilation methods provide notable improvements over the free model simulations, which differ from the ozonesonde measurements by about 20 % (below 200 hPa). Four-dimensional variational data assimilation with window lengths between five days and two weeks is the most accurate method, with mean differences between analysis profiles and ozonesonde measurements of 1-5 %. Two sequential assimilation approaches (three-dimensional variational and suboptimal KF), although derived from different theoretical considerations, provide similar ozone estimates, with relative differences of 5-10 % between the analyses and ozonesonde measurements. Adjoint sensitivity analysis techniques are used to explore the role of uncertainties in ozone precursors and their emissions on the distribution of tropospheric ozone. A novel technique is introduced that projects 3-D-Var increments back to an equivalent initial condition, which facilitates comparison with 4-D-Var techniques.
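
    The variational cost functional mentioned above can be made concrete with a toy linear example. For a linear observation operator, minimizing J(x) = (x - x_b)^T B^{-1} (x - x_b) + (y - Hx)^T R^{-1} (y - Hx) has a closed-form solution, shown below; the three-component state, covariances, and observations are invented and bear no relation to the GEOS-Chem system.

```python
import numpy as np

def threedvar_analysis(xb, B, y, H, R):
    """Minimize J(x) = (x-xb)^T B^-1 (x-xb) + (y-Hx)^T R^-1 (y-Hx) for linear H.

    The minimizer has the closed form xa = xb + B H^T (H B H^T + R)^-1 (y - H xb).
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Toy 3-component ozone state with 2 observations (all values invented).
xb = np.array([50.0, 60.0, 55.0])            # background state (ppb)
B  = 25.0 * np.eye(3)                        # background error covariance
H  = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.5, 0.5]])             # observation operator
y  = np.array([58.0, 52.0])                  # observations
R  = 4.0 * np.eye(2)                         # observation error covariance
print(threedvar_analysis(xb, B, y, H, R))
```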

  7. An Information Retrieval Approach for Robust Prediction of Road Surface States.

    PubMed

    Park, Jae-Hyung; Kim, Kwanho

    2017-01-28

    Recently, due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, have been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we first propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface based on similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated by using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.
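
    A minimal version of the retrieve-then-calibrate idea described above is sketched below: nearest-neighbour retrieval of previously observed signal instances votes on a surface state, and a moving window over recent predictions smooths the output. The feature vectors, labels, neighbourhood size, and window length are all invented for the illustration.

```python
import numpy as np
from collections import Counter, deque

class RoadStateEstimator:
    """Retrieve similar past signal instances, vote on a state, then smooth the output."""
    def __init__(self, k=3, window=7):
        self.features, self.labels = [], []
        self.k, self.recent = k, deque(maxlen=window)

    def add_example(self, feature, label):
        self.features.append(np.asarray(feature, dtype=float))
        self.labels.append(label)

    def predict(self, feature):
        f = np.asarray(feature, dtype=float)
        dists = [np.linalg.norm(f - g) for g in self.features]
        nearest = np.argsort(dists)[: self.k]
        raw = Counter(self.labels[i] for i in nearest).most_common(1)[0][0]
        self.recent.append(raw)
        # calibrate with the recently estimated states (majority over a moving window)
        return Counter(self.recent).most_common(1)[0][0]

est = RoadStateEstimator()
for feat, lab in [([0.9, 0.1], "dry"), ([0.2, 0.8], "icy"), ([0.85, 0.2], "dry"),
                  ([0.25, 0.9], "icy"), ([0.5, 0.5], "wet")]:
    est.add_example(feat, lab)
print(est.predict([0.8, 0.15]))
```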

  8. An Information Retrieval Approach for Robust Prediction of Road Surface States

    PubMed Central

    Park, Jae-Hyung; Kim, Kwanho

    2017-01-01

    Recently, due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, have been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we first propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface based on similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated by using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods. PMID:28134859

  9. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
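
    The first step of the scheme described above, building an initial MSM from simulation trajectories, reduces to counting transitions in a discretized trajectory and row-normalizing. The sketch below does exactly that and extracts equilibrium populations from the resulting matrix; the three-state trajectory is randomly generated for illustration, and the subsequent refinement against experimental data is not shown.

```python
import numpy as np

def estimate_transition_matrix(dtraj, n_states, lag=1):
    """Count transitions at the given lag time and row-normalize."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def stationary_distribution(T):
    """Equilibrium populations: the left eigenvector of T with eigenvalue 1."""
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

# Illustrative discretized trajectory over three conformational states.
rng = np.random.default_rng(4)
dtraj = rng.choice(3, size=5000, p=[0.5, 0.3, 0.2])
T = estimate_transition_matrix(dtraj, n_states=3, lag=1)
print(T)
print(stationary_distribution(T))
```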

  10. A Nonlinear Dynamics-Based Estimator for Functional Electrical Stimulation: Preliminary Results From Lower-Leg Extension Experiments.

    PubMed

    Allen, Marcus; Zhong, Qiang; Kirsch, Nicholas; Dani, Ashwin; Clark, William W; Sharma, Nitin

    2017-12-01

    Miniature inertial measurement units (IMUs) are wearable sensors that measure limb segment or joint angles during dynamic movements. However, IMUs are generally prone to drift, external magnetic interference, and measurement noise. This paper presents a new class of nonlinear state estimation technique called state-dependent coefficient (SDC) estimation to accurately predict joint angles from IMU measurements. The SDC estimation method uses limb dynamics, instead of limb kinematics, to estimate the limb state. Importantly, the nonlinear limb dynamic model is formulated into state-dependent matrices that facilitate the estimator design without performing a Jacobian linearization. The estimation method is experimentally demonstrated to predict knee joint angle measurements during functional electrical stimulation of the quadriceps muscle. The nonlinear knee musculoskeletal model was identified through a series of experiments. The SDC estimator was then compared with an extended Kalman filter (EKF), which uses a Jacobian linearization, and a rotation matrix method, which uses a kinematic model instead of the dynamic model. Each estimator's performance was evaluated against the true value of the joint angle, which was measured through a rotary encoder. The experimental results showed that the SDC estimator, the rotation matrix method, and the EKF had root mean square errors of 2.70°, 2.86°, and 4.42°, respectively. Our preliminary experimental results show the new estimator's advantage over the EKF method and only a slight advantage over the rotation matrix method. However, the information from the dynamic model allows the SDC method to use only one IMU to measure the knee angle, compared with the rotation matrix method that uses two IMUs to estimate the angle.

  11. Towards Unmanned Systems for Dismounted Operations in the Canadian Forces

    DTIC Science & Technology

    2011-01-01

    LIDAR, and RADAR) and lower power/mass, passive imaging techniques such as structure from motion and simultaneous localisation and mapping (SLAM) ... sensors and learning algorithms. 5.1.2 Simultaneous localisation and mapping: SLAM algorithms concurrently estimate a robot pose and a map of unique ... locations and vehicle pose are part of the SLAM state vector and are estimated in each update step. AISS developed a monocular camera-based SLAM

  12. Estimating the quadratic mean diameters of fine woody debris in forests of the United States

    Treesearch

    Christopher W. Woodall; Vicente J. Monleon

    2010-01-01

    Most fine woody debris (FWD) line-intersect sampling protocols and associated estimators require an approximation of the quadratic mean diameter (QMD) of each individual FWD size class. There is a lack of empirically derived QMDs by FWD size class and species/forest type across the U.S. The objective of this study is to evaluate a technique known as the graphical...

  13. Comparison of geostatistical interpolation and remote sensing techniques for estimating long-term exposure to ambient PM2.5 concentrations across the continental United States.

    PubMed

    Lee, Seung-Jae; Serre, Marc L; van Donkelaar, Aaron; Martin, Randall V; Burnett, Richard T; Jerrett, Michael

    2012-12-01

    A better understanding of the adverse health effects of chronic exposure to fine particulate matter (PM2.5) requires accurate estimates of PM2.5 variation at fine spatial scales. Remote sensing has emerged as an important means of estimating PM2.5 exposures, but relatively few studies have compared remote-sensing estimates to those derived from monitor-based data. We evaluated and compared the predictive capabilities of remote sensing and geostatistical interpolation. We developed a space-time geostatistical kriging model to predict PM2.5 over the continental United States and compared the resulting predictions to estimates derived from satellite retrievals. The kriging estimate was more accurate for locations within about 100 km of a monitoring station, whereas the remote sensing estimate was more accurate for locations more than 100 km from a monitoring station. Based on this finding, we developed a hybrid map that combines the kriging and satellite-based PM2.5 estimates. We found that for most of the populated areas of the continental United States, geostatistical interpolation produced more accurate estimates than remote sensing. The differences between the estimates resulting from the two methods, however, were relatively small. In areas with extensive monitoring networks, the interpolation may provide more accurate estimates, but in the many areas of the world without such monitoring, remote sensing can provide useful exposure estimates that perform nearly as well.
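
    The hybrid map described above amounts to choosing, per grid cell, between a kriging estimate and a satellite-based estimate according to the distance to the nearest monitor. The sketch below applies that selection rule to precomputed estimates; the coordinates, concentrations, and the 100 km cutoff used here are illustrative stand-ins, and the kriging itself is assumed to have been done elsewhere.

```python
import numpy as np

def hybrid_pm25(grid_xy, monitor_xy, kriging_est, satellite_est, cutoff_km=100.0):
    """Pick the kriging estimate near monitors and the satellite estimate far away.

    grid_xy (n,2) and monitor_xy (m,2) are coordinates in km; kriging_est and
    satellite_est are PM2.5 estimates on the same grid. The 100 km cutoff mirrors
    the crossover distance reported in the abstract.
    """
    d = np.sqrt(((grid_xy[:, None, :] - monitor_xy[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return np.where(d <= cutoff_km, kriging_est, satellite_est)

# Toy grid cells and one monitor (all coordinates and concentrations invented).
grid = np.array([[0.0, 0.0], [50.0, 0.0], [250.0, 0.0]])
monitors = np.array([[10.0, 0.0]])
print(hybrid_pm25(grid, monitors,
                  kriging_est=np.array([8.1, 8.4, 7.0]),
                  satellite_est=np.array([9.0, 8.0, 7.5])))
```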

  14. Use of Advanced Machine-Learning Techniques for Non-Invasive Monitoring of Hemorrhage

    DTIC Science & Technology

    2010-04-01

    that state-of-the-art machine learning techniques when integrated with novel non-invasive monitoring technologies could detect subtle, physiological...decompensation. Continuous, non-invasively measured hemodynamic signals (e.g., ECG, blood pressures, stroke volume) were used for the development of machine ... learning algorithms. Accuracy estimates were obtained by building models using 27 subjects and testing on the 28th. This process was repeated 28 times

  15. A navigation and control system for an autonomous rescue vehicle in the space station environment

    NASA Technical Reports Server (NTRS)

    Merkel, Lawrence

    1991-01-01

    A navigation and control system was designed and implemented for an orbital autonomous rescue vehicle envisioned to retrieve astronauts or equipment in the case that they become disengaged from the space station. The rescue vehicle, termed the Extra-Vehicular Activity Retriever (EVAR), has an on-board inertial measurement unit and GPS receivers for self state estimation, a laser range imager (LRI) and cameras for object state estimation, and a data link for reception of space station state information. The states of the retriever and objects (obstacles and the target object) are estimated by inertial state propagation, which is corrected via measurements from the GPS, the LRI system, or the camera system. Kalman filters are utilized to perform sensor fusion and estimate the state propagation errors. Control actuation is performed by a Manned Maneuvering Unit (MMU). Phase plane control techniques are used to control the rotational and translational state of the retriever. The translational controller provides station-keeping or motion along either Clohessy-Wiltshire trajectories or straight line trajectories in the LVLH frame of any sufficiently observed object or of the space station. The software was used to successfully control a prototype EVAR on an air bearing floor facility, and a simulated EVAR operating in a simulated orbital environment. The design of the navigation system and the control system are presented. Also discussed are the hardware systems and the overall software architecture.
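
    Since the abstract mentions motion along Clohessy-Wiltshire trajectories in the LVLH frame, the short Python sketch below gives the standard closed-form CW propagation of a relative state. This is textbook relative-motion math with illustrative numbers, not the EVAR flight software.

        import numpy as np

        def cw_propagate(x0, t, n):
            """Propagate a relative state [x, y, z, vx, vy, vz] in the LVLH frame
            (x radial, y along-track, z cross-track) for time t using the
            closed-form Clohessy-Wiltshire solution; n is the mean motion of the
            reference (space station) orbit in rad/s."""
            s, c = np.sin(n * t), np.cos(n * t)
            Phi = np.array([
                [4 - 3*c,      0, 0,          s/n,      2*(1 - c)/n,      0],
                [6*(s - n*t),  1, 0, -2*(1 - c)/n, (4*s - 3*n*t)/n,       0],
                [0,            0, c,            0,              0,      s/n],
                [3*n*s,        0, 0,            c,            2*s,        0],
                [-6*n*(1 - c), 0, 0,         -2*s,        4*c - 3,        0],
                [0,            0, -n*s,         0,              0,        c],
            ])
            return Phi @ x0

        # Example: a retriever 100 m behind the station with a small radial
        # offset, propagated for 10 minutes in a ~90-minute orbit.
        n = 2 * np.pi / (90 * 60)                       # mean motion [rad/s]
        x0 = np.array([10.0, -100.0, 0.0, 0.0, 0.0, 0.0])
        print(cw_propagate(x0, t=600.0, n=n))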

  16. Comprehensive Space-Object Characterization using Spectrally Compressive Polarimetric Sensing

    DTIC Science & Technology

    2015-04-08

    90°, 45°, and 135° polarization channels for linear polarization state estimation. This linear polarimetry would satisfy several applications without...persive element. This technique eliminates mechanical movements that hinder conventional polarimetry. The experimental results show clear spatial

  17. A comprehensive review of on-board State-of-Available-Power prediction techniques for lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Farmann, Alexander; Sauer, Dirk Uwe

    2016-10-01

    This study provides an overview of available techniques for on-board State-of-Available-Power (SoAP) prediction of lithium-ion batteries (LIBs) in electric vehicles. Different approaches to on-board estimation of battery State-of-Charge (SoC) or State-of-Health (SoH) have been discussed extensively in past research. However, the topic of SoAP prediction has not yet been explored comprehensively. Predicting the maximum power that can be drawn from or delivered to the battery during acceleration, regenerative braking, and gradient climbing is one of the most challenging tasks of a battery management system. In large lithium-ion battery packs, many factors, such as temperature distribution and cell-to-cell deviations in actual impedance or capacity in either the initial or the aged state, make efficient and reliable battery state estimation methods necessary. The available battery power is limited by the safe operating area (SOA), which is defined by battery temperature, current, voltage, and SoC. Accurate SoAP prediction allows the energy management system to regulate the power flow of the vehicle more precisely, optimize battery performance, and improve battery lifetime accordingly. To this end, scientific and technical literature sources are studied and available approaches are reviewed.
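
    As a minimal illustration of the prediction problem this review addresses, the Python sketch below computes the discharge power available from a simple internal-resistance (Rint) battery model subject to voltage, current, and SoC limits. Real SoAP algorithms use far richer models; the parameter values here are assumptions for illustration only.

        def soap_discharge(ocv, r_int, v_min, i_max, soc, soc_min):
            """Rough State-of-Available-Power estimate for discharge (Rint model).

            The available current is limited by the terminal-voltage floor v_min,
            the maximum allowed cell current i_max, and the remaining SoC window.
            """
            i_voltage_limited = (ocv - v_min) / r_int   # current that drives V to v_min
            i_allowed = min(i_voltage_limited, i_max)
            if soc <= soc_min:                          # outside the safe operating area
                return 0.0
            v_terminal = ocv - r_int * i_allowed
            return v_terminal * i_allowed               # predicted available power [W]

        # Example with illustrative cell parameters (3.7 V open-circuit voltage,
        # 2 mOhm internal resistance, 2.8 V cut-off, 300 A current limit).
        print(soap_discharge(ocv=3.7, r_int=0.002, v_min=2.8, i_max=300.0,
                             soc=0.6, soc_min=0.1))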

  18. Effect of Extended State Observer and Automatic Voltage Regulator on Synchronous Machine Connected to Infinite Bus Power System

    NASA Astrophysics Data System (ADS)

    Angu, Rittu; Mehta, R. K.

    2018-04-01

    This paper presents a robust controller known as an Extended State Observer (ESO) to improve the stability and voltage regulation of a synchronous machine connected to an infinite bus power system through a transmission line. The ESO-based control scheme is implemented with an automatic voltage regulator in conjunction with an excitation system to enhance the damping of low-frequency power system oscillations, as a Power System Stabilizer (PSS) does. Implementing PSS excitation control, however, requires reliable information about all of the system states, even though they are not always directly measurable. To address this issue, the proposed ESO estimates the system states together with the disturbance state, which not only improves damping but also compensates the system efficiently in the presence of parameter uncertainties and external disturbances. The Closed-Loop Poles (CLPs) of the system have been assigned by the symmetric root locus technique, with the desired level of system damping provided by the dominant CLPs. The performance of the system is analyzed through simulation at different operating conditions. The control method not only provides zero estimation error in steady state, but also shows robustness in tracking the reference command under parametric variations and external disturbances. Illustrative examples demonstrate the effectiveness of the developed methodology.
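
    For readers unfamiliar with the observer structure the abstract refers to, the Python sketch below implements a generic third-order linear ESO for a second-order plant, estimating the two plant states plus a lumped "total disturbance" state. The plant, control gain, and observer bandwidth are assumptions for illustration; this is not the authors' synchronous-machine design.

        import numpy as np

        # Plant (illustrative): y_ddot = f(t, y, y_dot, d) + b0*u, with f unknown.
        # Linear ESO states: z1 ~ y, z2 ~ y_dot, z3 ~ f (total disturbance).
        b0 = 1.0          # assumed control gain
        wo = 20.0         # observer bandwidth [rad/s]; poles placed at -wo (repeated)
        l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3

        def eso_step(z, y_meas, u, dt):
            """One Euler integration step of the linear extended state observer."""
            z1, z2, z3 = z
            e = z1 - y_meas
            z1 += dt * (z2 - l1 * e)
            z2 += dt * (z3 + b0 * u - l2 * e)
            z3 += dt * (-l3 * e)
            return np.array([z1, z2, z3])

        # Example: track a plant driven by an unknown constant disturbance.
        dt, z = 1e-3, np.zeros(3)
        y, ydot, dist = 0.0, 0.0, 2.0
        for _ in range(2000):
            u = 0.5                                    # some control input
            y, ydot = y + dt * ydot, ydot + dt * (dist + b0 * u)
            z = eso_step(z, y, u, dt)
        print(z)   # z3 should approach the true disturbance (2.0)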

  19. Methods for estimating magnitude and frequency of floods in Montana based on data through 1983

    USGS Publications Warehouse

    Omang, R.J.; Parrett, Charles; Hull, J.A.

    1986-01-01

    Equations are presented for estimating flood magnitudes for ungaged sites in Montana based on data through 1983. The State was divided into eight regions based on hydrologic conditions, and separate multiple regression equations were developed for each region. These equations relate annual flood magnitudes and frequencies to basin characteristics and are applicable only to natural flow streams. In three of the regions, equations also were developed relating flood magnitudes and frequencies to basin characteristics and channel geometry measurements. The standard errors of estimate for an exceedance probability of 1% ranged from 39% to 87%. Techniques are described for estimating annual flood magnitude and flood frequency information at ungaged sites based on data from gaged sites on the same stream. Included are curves relating flood frequency information to drainage area for eight major streams in the State. Maximum known flood magnitudes in Montana are compared with estimated 1%-chance flood magnitudes and with maximum known floods in the United States. Values of flood magnitudes for selected exceedance probabilities and values of significant basin characteristics and channel geometry measurements for all gaging stations used in the analysis are tabulated. Included are 375 stations in Montana and 28 nearby stations in Canada and adjoining States. (Author's abstract)
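
    Regional regression equations of this kind typically take a log-linear form such as Q_T = a·A^b·P^c, where A is drainage area and P is mean annual precipitation. The Python sketch below fits and applies such an equation to made-up gage data purely to illustrate the technique; it does not reproduce the Montana coefficients.

        import numpy as np

        # Made-up training data for illustration: drainage area A [mi^2],
        # mean annual precipitation P [in], and observed 1%-chance flood Q [ft^3/s].
        A = np.array([12.0, 55.0, 140.0, 320.0, 900.0])
        P = np.array([14.0, 18.0, 22.0, 16.0, 25.0])
        Q = np.array([310.0, 1200.0, 3400.0, 4800.0, 15000.0])

        # Fit log10(Q) = log10(a) + b*log10(A) + c*log10(P) by ordinary least squares.
        X = np.column_stack([np.ones_like(A), np.log10(A), np.log10(P)])
        coef, *_ = np.linalg.lstsq(X, np.log10(Q), rcond=None)
        log_a, b, c = coef

        def q100(area, precip):
            """Estimate the 1%-chance flood at an ungaged site from basin characteristics."""
            return 10 ** (log_a + b * np.log10(area) + c * np.log10(precip))

        print(q100(area=200.0, precip=20.0))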

  20. Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors

    NASA Astrophysics Data System (ADS)

    da Silva, Andre F. C.; Colonius, Tim

    2017-11-01

    The ensemble Kalman filter (EnKF) has been proven to be successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with the mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number are represented by a Navier-Stokes solver with immersed-boundary capabilities. The pressure distribution on the airfoil or the velocity field in the wake, both randomized by synthetic noise, are sampled as measurement data and incorporated into the estimated state and bias following the Kalman analysis scheme. Insights about how to specify the modeling error covariance matrix and its impact on the estimator performance are conveyed. This work has been supported in part by a Grant from AFOSR (FA9550-14-1-0328) with Dr. Douglas Smith as program manager, and by a Science without Borders scholarship from the Ministry of Education of Brazil (Capes Foundation - BEX 12966/13-4).
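
    The state-augmentation idea can be sketched with a toy stochastic EnKF analysis step in which the ensemble state is extended by a bias parameter seen additively by the measurements. The linear observation operator, ensemble size, and noise levels below are illustrative assumptions; the actual work uses a Navier-Stokes forecast model, which is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def enkf_analysis(X, y, H, R):
            """Stochastic EnKF analysis step.

            X : (n, N) ensemble of augmented states [flow state; bias parameters]
            y : (m,)   observation vector
            H : (m, n) linear observation operator (for this toy example)
            R : (m, m) observation-error covariance
            """
            n, N = X.shape
            Xm = X.mean(axis=1, keepdims=True)
            A = X - Xm                                    # ensemble anomalies
            P = A @ A.T / (N - 1)                         # sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
            # Perturbed observations keep the analysis ensemble spread consistent.
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
            return X + K @ (Y - H @ X)

        # Toy example: 3 flow states plus 1 bias state, 50 members, 2 measurements.
        N = 50
        X = rng.normal(size=(4, N))
        H = np.array([[1.0, 0.0, 0.0, 1.0],      # measurements see the bias additively
                      [0.0, 1.0, 0.0, 1.0]])
        R = 0.05 * np.eye(2)
        y = np.array([0.8, -0.3])
        Xa = enkf_analysis(X, y, H, R)
        print(Xa.mean(axis=1))                    # analysis mean, incl. estimated bias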

  1. A review of data fusion techniques.

    PubMed

    Castanedo, Federico

    2013-01-01

    The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion.

  2. Methods for calculating forest ecosystem and harvested carbon with standard estimates for forest types of the United States

    Treesearch

    James E. Smith; Linda S. Heath; Kenneth E. Skog; Richard A. Birdsey

    2006-01-01

    This study presents techniques for calculating average net annual additions to carbon in forests and in forest products. Forest ecosystem carbon yield tables, representing stand-level merchantable volume and carbon pools as a function of stand age, were developed for 51 forest types within 10 regions of the United States. Separate tables were developed for...

  3. Robust Characterization of Loss Rates

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2015-08-01

    Many physical implementations of qubits—including ion traps, optical lattices and linear optics—suffer from loss. A nonzero probability of irretrievably losing a qubit can be a substantial obstacle to fault-tolerant methods of processing quantum information, requiring new techniques to safeguard against loss that introduce an additional overhead that depends upon the loss rate. Here we present a scalable and platform-independent protocol for estimating the average loss rate (averaged over all input states) resulting from an arbitrary Markovian noise process, as well as an independent estimate of detector efficiency. Moreover, we show that our protocol gives an additional constraint on estimated parameters from randomized benchmarking that improves the reliability of the estimated error rate and provides a new indicator for non-Markovian signatures in the experimental data. We also derive a bound for the state-dependent loss rate in terms of the average loss rate.

  4. Rainfall: State of the Science

    NASA Astrophysics Data System (ADS)

    Testik, Firat Y.; Gebremichael, Mekonnen

    Rainfall: State of the Science offers the most up-to-date knowledge on the fundamental and practical aspects of rainfall. Each chapter, self-contained and written by prominent scientists in their respective fields, provides three forms of information: fundamental principles, detailed overview of current knowledge and description of existing methods, and emerging techniques and future research directions. The book discusses • Rainfall microphysics: raindrop morphodynamics, interactions, size distribution, and evolution • Rainfall measurement and estimation: ground-based direct measurement (disdrometer and rain gauge), weather radar rainfall estimation, polarimetric radar rainfall estimation, and satellite rainfall estimation • Statistical analyses: intensity-duration-frequency curves, frequency analysis of extreme events, spatial analyses, simulation and disaggregation, ensemble approach for radar rainfall uncertainty, and uncertainty analysis of satellite rainfall products The book is tailored to be an indispensable reference for researchers, practitioners, and graduate students who study any aspect of rainfall or utilize rainfall information in various science and engineering disciplines.

  5. Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar

    NASA Astrophysics Data System (ADS)

    Lottman, Brian Todd

    1998-09-01

    This work proposes advanced techniques for measuring the spatial wind field statistics near and inside clouds using a vertically pointing solid state coherent Doppler lidar on a fixed ground based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate for the mean velocity over a sensing volume is produced by estimating the mean spectra. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new class of estimators (novel) is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. Performance of traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer simulated data. Wind field statistics are produced for actual data for a cloud deck, and for multi-layer clouds. Unique results include detection of possible spectral signatures for rain, estimates for the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates for simple wind field statistics between cloud layers.

  6. Synchrophasor Data Correction under GPS Spoofing Attack: A State Estimation Based Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Xiaoyuan; Du, Liang; Duan, Dongliang

    GPS spoofing attack (GSA) has been shown to be one of the most imminent threats to almost all cyber-physical systems incorporated with the civilian GPS signal. Specifically, for our current agenda of the modernization of the power grid, this may greatly jeopardize the benefits provided by the pervasively installed phasor measurement units (PMU). In this study, we consider the case where synchrophasor data from PMUs are compromised due to the presence of a single GSA, and show that it can be corrected by signal processing techniques. In particular, we introduce a statistical model for synchrophasor-based power system state estimation (SE), and then derive the spoofing-matched algorithms for synchrophasor data correction against GPS spoofing attack. Different testing scenarios in IEEE 14-, 30-, 57-, 118-bus systems are simulated to show the proposed algorithms’ performance on GSA detection and state estimation. Numerical results demonstrate that our proposed algorithms can consistently locate and correct the spoofed synchrophasor data with good accuracy as long as the system observability is satisfied. Finally, the accuracy of state estimation is significantly improved compared with the traditional weighted least square method and approaches the performance under the Genie-aided method.
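
    The traditional weighted least squares estimator used as the baseline above has the familiar normal-equation form x_hat = (H^T W H)^-1 H^T W z. The Python sketch below shows that baseline on a made-up linearized (DC-style) measurement model; the spoofing-matched correction itself is the paper's contribution and is not reproduced here.

        import numpy as np

        def wls_state_estimate(H, z, sigma):
            """Classical weighted least squares power system state estimation.

            H     : (m, n) linearized measurement matrix
            z     : (m,)   measurement vector (e.g., power flows/injections)
            sigma : (m,)   measurement standard deviations, W = diag(1/sigma_i^2)
            """
            W = np.diag(1.0 / np.asarray(sigma) ** 2)
            G = H.T @ W @ H                      # gain matrix; invertible if observable
            x_hat = np.linalg.solve(G, H.T @ W @ z)
            residuals = z - H @ x_hat
            return x_hat, residuals

        # Toy DC example: 2 unknown bus angles, 3 measurements (made-up numbers).
        H = np.array([[10.0, -10.0],
                      [10.0,   0.0],
                      [ 0.0,  10.0]])
        z = np.array([1.02, 2.01, 0.98])
        x_hat, r = wls_state_estimate(H, z, sigma=[0.01, 0.02, 0.02])
        print(x_hat, r)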

  7. Precomputing Process Noise Covariance for Onboard Sequential Filters

    NASA Technical Reports Server (NTRS)

    Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell

    2017-01-01

    Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
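
    The role of the process noise covariance described above can be seen directly in the covariance propagation step of a discrete Kalman filter, P_k+1 = F P_k F^T + Q_k. The Python sketch below contrasts a fixed Q with a precomputed, time-varying Q profile on a toy two-state system; all values are illustrative and unrelated to the Bennu scenario.

        import numpy as np

        dt = 10.0                                     # propagation step [s]
        F = np.array([[1.0, dt],                      # simple position/velocity dynamics
                      [0.0, 1.0]])

        def propagate_covariance(P0, Q_profile):
            """Propagate the state covariance through several steps, adding the
            (possibly time-varying) process noise covariance at each step."""
            P = P0.copy()
            for Q in Q_profile:
                P = F @ P @ F.T + Q
            return P

        P0 = np.diag([1.0, 0.01])

        # Traditional approach: one constant Q tuned by trial and error.
        Q_const = [np.diag([0.0, 1e-4])] * 6

        # Precomputed profile: Q grows where unmodeled accelerations are expected
        # to be large along the reference trajectory (illustrative values).
        Q_profile = [np.diag([0.0, q]) for q in (1e-6, 1e-6, 1e-4, 1e-2, 1e-4, 1e-6)]

        print(np.trace(propagate_covariance(P0, Q_const)))
        print(np.trace(propagate_covariance(P0, Q_profile)))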

  8. Precomputing Process Noise Covariance for Onboard Sequential Filters

    NASA Technical Reports Server (NTRS)

    Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell

    2017-01-01

    Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory, using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis publications is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.

  9. An adaptive observer for on-line tool wear estimation in turning, Part I: Theory

    NASA Astrophysics Data System (ADS)

    Danai, Kourosh; Ulsoy, A. Galip

    1987-04-01

    On-line sensing of tool wear has been a long-standing goal of the manufacturing engineering community. In the absence of any reliable on-line tool wear sensors, a new model-based approach for tool wear estimation has been proposed. This approach is an adaptive observer, based on force measurement, which uses both parameter and state estimation techniques. The design of the adaptive observer is based upon a dynamic state model of tool wear in turning. This paper (Part I) presents the model, and explains its use as the basis for the adaptive observer design. This model uses flank wear and crater wear as state variables, feed as the input, and the cutting force as the output. The suitability of the model as the basis for adaptive observation is also verified. The implementation of the adaptive observer requires the design of a state observer and a parameter estimator. To obtain the model parameters for tuning the adaptive observer, procedures for linearisation of the non-linear model are specified. The implementation of the adaptive observer in turning and experimental results are presented in a companion paper (Part II).

  10. Synchrophasor Data Correction under GPS Spoofing Attack: A State Estimation Based Approach

    DOE PAGES

    Fan, Xiaoyuan; Du, Liang; Duan, Dongliang

    2017-02-01

    GPS spoofing attack (GSA) has been shown to be one of the most imminent threats to almost all cyber-physical systems incorporated with the civilian GPS signal. Specifically, for our current agenda of the modernization of the power grid, this may greatly jeopardize the benefits provided by the pervasively installed phasor measurement units (PMU). In this study, we consider the case where synchrophasor data from PMUs are compromised due to the presence of a single GSA, and show that it can be corrected by signal processing techniques. In particular, we introduce a statistical model for synchrophasor-based power system state estimation (SE), and then derive the spoofing-matched algorithms for synchrophasor data correction against GPS spoofing attack. Different testing scenarios in IEEE 14-, 30-, 57-, 118-bus systems are simulated to show the proposed algorithms’ performance on GSA detection and state estimation. Numerical results demonstrate that our proposed algorithms can consistently locate and correct the spoofed synchrophasor data with good accuracy as long as the system observability is satisfied. Finally, the accuracy of state estimation is significantly improved compared with the traditional weighted least square method and approaches the performance under the Genie-aided method.

  11. NREL Triples Previous Estimates of U.S. Wind Power Potential (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The National Renewable Energy Laboratory (NREL) recently released new estimates of the U.S. potential for wind-generated electricity, using advanced wind mapping and validation techniques to triple previous estimates of the size of the nation's wind resources. The new study, conducted by NREL and AWS TruePower, finds that the contiguous 48 states have the potential to generate up to 37 million gigawatt-hours annually. In comparison, the total U.S. electricity generation from all sources was roughly 4 million gigawatt-hours in 2009.

  12. Fuzzy logic modeling of high performance rechargeable batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, P.; Fennie, C. Jr.; Reisner, D.E.

    1998-07-01

    Accurate battery state-of-charge (SOC) measurements are critical in many portable electronic device applications. Yet conventional techniques for battery SOC estimation are limited in their accuracy, reliability, and flexibility. In this paper the authors present a powerful new approach to estimate battery SOC using a fuzzy logic-based methodology. This approach provides a universally applicable, accurate method for battery SOC estimation either integrated within, or as an external monitor to, an electronic device. The methodology is demonstrated in modeling impedance measurements on Ni-MH cells and discharge voltage curves of Li-ion cells.
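
    Fuzzy-logic SOC estimation of this kind maps measured features (here, a single impedance value) to SOC through membership functions and a small rule base. The Python sketch below uses triangular memberships and weighted-average defuzzification; the memberships, rule consequents, and numbers are illustrative assumptions, not the authors' Ni-MH or Li-ion models.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with support [a, c] and peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def fuzzy_soc(impedance_mohm):
            """Toy single-input fuzzy SOC estimator (weighted-average defuzzification).

            Rule base (illustrative): low impedance -> high SOC, medium -> medium,
            high impedance -> low SOC.
            """
            mu_low  = tri(impedance_mohm,  5.0, 10.0, 20.0)
            mu_med  = tri(impedance_mohm, 15.0, 25.0, 35.0)
            mu_high = tri(impedance_mohm, 30.0, 45.0, 60.0)
            soc_out = np.array([90.0, 50.0, 15.0])        # rule consequents [% SOC]
            mu = np.array([mu_low, mu_med, mu_high])
            if mu.sum() == 0.0:
                return float("nan")                        # outside the modeled range
            return float(mu @ soc_out / mu.sum())

        print(fuzzy_soc(12.0))   # low impedance  -> high SOC (near 90%)
        print(fuzzy_soc(40.0))   # high impedance -> low SOC (near 15%)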

  13. Estimation and filtering techniques for high-accuracy GPS applications

    NASA Technical Reports Server (NTRS)

    Lichten, S. M.

    1989-01-01

    Techniques for determination of very precise orbits for satellites of the Global Positioning System (GPS) are currently being studied and demonstrated. These techniques can be used to make cm-accurate measurements of station locations relative to the geocenter, monitor earth orientation over timescales of hours, and provide tropospheric and clock delay calibrations during observations made with deep space radio antennas at sites where the GPS receivers have been collocated. For high-earth orbiters, meter-level knowledge of position will be available from GPS, while at low altitudes, sub-decimeter accuracy will be possible. Estimation of satellite orbits and other parameters such as ground station positions is carried out with a multi-satellite batch sequential pseudo-epoch state process noise filter. Both square-root information filtering (SRIF) and UD-factorized covariance filtering formulations are implemented in the software.

  14. 78 FR 11135 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-15

    ..., electronic, mechanical or other technological collection techniques or other forms of information technology... unless it displays a currently valid OMB control number. National Agricultural Statistics Service Title... National Agricultural Statistics Service (NASS) is to prepare and issue State and national estimates of...

  15. Coal Thickness Gauging Using Elastic Waves

    NASA Technical Reports Server (NTRS)

    Nazarian, Soheil; Bar-Cohen, Yoseph

    1999-01-01

    The efforts of a mining crew can be optimized, if the thickness of the coal layers to be excavated is known before excavation. Wave propagation techniques can be used to estimate the thickness of the layer based on the contrast in the wave velocity between coal and rock beyond it. Another advantage of repeated wave measurement is that the state of the stress within the mine can be estimated. The state of the stress can be used in many safety-related decisions made during the operation of the mine. Given these two advantages, a study was carried out to determine the feasibility of the methodology. The results are presented herein.

  16. Rotorcraft system identification techniques for handling qualities and stability and control evaluation

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.

    1978-01-01

    An integrated approach to rotorcraft system identification is described. This approach consists of sequential application of (1) data filtering to estimate states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described which can be used to design control inputs that maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide a means of calibrating sensor errors in flight data, quantifying high order state variable models from the flight data, and consequently computing related stability and control design models.

  17. Analysis of distortion data from TF30-P-3 mixed compression inlet test

    NASA Technical Reports Server (NTRS)

    King, R. W.; Schuerman, J. A.; Muller, R. G.

    1976-01-01

    A program was conducted to reduce and analyze inlet and engine data obtained during testing of a TF30-P-3 engine operating behind a mixed compression inlet. Previously developed distortion analysis techniques were applied to the data to assist in the development of a new distortion methodology. Instantaneous distortion techniques were refined as part of the distortion methodology development. A technique for estimating maximum levels of instantaneous distortion from steady state and average turbulence data was also developed as part of the program.

  18. Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates.

    PubMed

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-14

    The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ in either geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.

  19. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
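
    The core of the reduction can be sketched with a singular value decomposition of the health-parameter influence, keeping only the leading singular directions as the low-dimensional tuning vector. The Python example below applies that reduction to a random, made-up influence matrix rather than a real engine model, so the dimensions and numbers are purely illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        # Made-up sensitivity of engine outputs (rows) to health parameters (columns).
        # In the real application this comes from the linearized engine model.
        n_outputs, n_health = 8, 10
        G = rng.normal(size=(n_outputs, n_health))

        # SVD of the influence matrix; keep only as many tuning directions as can
        # be estimated with the available sensors.
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        k = 4                                    # dimension of the tuning vector
        V_k = Vt[:k].T                           # (n_health, k) reduction matrix

        # A given health-parameter vector h is represented by the tuning vector q:
        h = rng.normal(size=n_health)
        q = V_k.T @ h                            # low-dimensional tuner (estimable by a KF)
        h_approx = V_k @ q                       # rank-k representation of h

        # Output effect lost by the reduction, compared with the total output effect.
        print(np.linalg.norm(G @ (h - h_approx)), np.linalg.norm(G @ h))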

  20. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  1. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  2. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  3. Tree-level imputation techniques to estimate current plot-level attributes in the Pacific Northwest using paneled inventory data

    Treesearch

    Bianca Eskelson; Temesgen Hailemariam; Tara Barrett

    2009-01-01

    The Forest Inventory and Analysis program (FIA) of the US Forest Service conducts a nationwide annual inventory. One panel (20% or 10% of all plots in the eastern and western United States, respectively) is measured each year. The precision of the estimates for any given year from one panel is low, and the moving average (MA), which is considered to be the default...

  4. Remote sensing in Iowa agriculture. [cropland inventory, soils, forestland, and crop diseases

    NASA Technical Reports Server (NTRS)

    Mahlstede, J. P. (Principal Investigator); Carlson, R. E.

    1973-01-01

    The author has identified the following significant results. Results include the estimation of forested and crop vegetation acreages using the ERTS-1 imagery. The methods used to achieve these estimates still require refinement, but the results appear promising. Practical applications would be directed toward achieving current land use inventories of these natural resources. These data are presently collected by sampling-type surveys. If ERTS-1 can observe this and area estimates can be determined accurately, then a step forward has been achieved. The cost-benefit relationship will have to be favorable. Problems still exist in these estimation techniques due to the diversity of the scene observed in the ERTS-1 imagery covering other parts of Iowa. This is due to the influence of topography and soils upon the adaptability of the vegetation to specific areas of the state. The state mosaic produced from ERTS-1 imagery shows these patterns very well. Research directed to acreage estimates is continuing.

  5. Study of the extra-ionic electron distributions in semi-metallic structures by nuclear quadrupole resonance techniques

    NASA Technical Reports Server (NTRS)

    Murty, A. N.

    1976-01-01

    A straightforward self-consistent method was developed to estimate solid state electrostatic potentials, fields and field gradients in ionic solids. The method is a direct practical application of basic electrostatics to solid state and also helps in the understanding of the principles of crystal structure. The necessary mathematical equations, derived from first principles, were presented and the systematic computational procedure developed to arrive at the solid state electrostatic field gradients values was given.

  6. Energy awareness for supercapacitors using Kalman filter state-of-charge tracking

    NASA Astrophysics Data System (ADS)

    Nadeau, Andrew; Hassanalieragh, Moeen; Sharma, Gaurav; Soyata, Tolga

    2015-11-01

    Among energy buffering alternatives, supercapacitors can provide unmatched efficiency and durability. Additionally, the direct relation between a supercapacitor's terminal voltage and stored energy can improve energy awareness. However, a simple capacitive approximation cannot adequately represent the stored energy in a supercapacitor. It is shown that the three branch equivalent circuit model provides more accurate energy awareness. This equivalent circuit uses three capacitances and associated resistances to represent the supercapacitor's internal SOC (state-of-charge). However, the SOC cannot be determined from one observation of the terminal voltage, and must be tracked over time using inexact measurements. We present: 1) a Kalman filtering solution for tracking the SOC; 2) an on-line system identification procedure to efficiently estimate the equivalent circuit's parameters; and 3) experimental validation of both parameter estimation and SOC tracking for 5 F, 10 F, 50 F, and 350 F supercapacitors. Validation is done within the operating range of a solar powered application and the associated power variability due to energy harvesting. The proposed techniques are benchmarked against the simple capacitive model and prior parameter estimation techniques, and provide a 67% reduction in root-mean-square error for predicting usable buffered energy.
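
    Once an equivalent circuit is fixed, tracking the internal charge state from noisy terminal-voltage measurements is a linear Kalman filtering problem. The Python sketch below uses a simplified single-branch model (one capacitance plus a series resistance) rather than the three-branch circuit described above, with illustrative parameter and noise values.

        import numpy as np

        # Simplified supercapacitor model (single branch, for illustration only):
        # state q = stored charge [C]; terminal voltage v = q/C + R_s * i.
        C, R_s = 50.0, 0.02        # capacitance [F] and series resistance [ohm] (assumed)
        dt = 1.0                   # sample period [s]
        Qn = 1e-4                  # process noise variance (unmodeled current/leakage)
        Rn = 1e-4                  # terminal-voltage measurement noise variance

        def kf_soc_step(q, P, i, v_meas):
            """One Kalman filter step tracking stored charge from terminal voltage."""
            # Predict: the charge integrates the (measured) current.
            q_pred = q + i * dt
            P_pred = P + Qn
            # Update: measurement model v = q/C + R_s*i, so H = 1/C.
            H = 1.0 / C
            S = H * P_pred * H + Rn
            K = P_pred * H / S
            innov = v_meas - (q_pred / C + R_s * i)
            return q_pred + K * innov, (1.0 - K * H) * P_pred

        # Example: constant 2 A charge current with synthetic (noise-free) voltages.
        q, P = 0.0, 1.0
        for k in range(10):
            i = 2.0
            v_meas = ((k + 1) * i * dt) / C + R_s * i
            q, P = kf_soc_step(q, P, i, v_meas)
        print(q / C)    # estimated capacitor voltage, a stored-energy indicator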

  7. Efficient implementation of a real-time estimation system for thalamocortical hidden Parkinsonian properties

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.

    2017-01-01

    Real-time estimation of dynamical characteristics of thalamocortical cells, such as dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, the FPGA-based unscented Kalman filter is implemented with a conductance-based TC neuron model. Since the complexity of the TC neuron model constrains its parallel hardware implementation, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While the proposed method is applied here to estimate the hidden properties of the thalamus and explore the mechanism of the Parkinsonian state, it can also be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering, and brain-machine interface studies.

  8. Optimal electrode selection for multi-channel electroencephalogram based detection of auditory steady-state responses.

    PubMed

    Van Dun, Bram; Wouters, Jan; Moonen, Marc

    2009-07-01

    Auditory steady-state responses (ASSRs) are used for hearing threshold estimation at audiometric frequencies. Hearing impaired newborns, in particular, benefit from this technique as it allows for a more precise diagnosis than traditional techniques, and a hearing aid can be better fitted at an early age. However, measurement duration of current single-channel techniques is still too long for clinical widespread use. This paper evaluates the practical performance of a multi-channel electroencephalogram (EEG) processing strategy based on a detection theory approach. A minimum electrode set is determined for ASSRs with frequencies between 80 and 110 Hz using eight-channel EEG measurements of ten normal-hearing adults. This set provides a near-optimal hearing threshold estimate for all subjects and improves response detection significantly for EEG data with numerous artifacts. Multi-channel processing does not significantly improve response detection for EEG data with few artifacts. In this case, best response detection is obtained when noise-weighted averaging is applied on single-channel data. The same test setup (eight channels, ten normal-hearing subjects) is also used to determine a minimum electrode setup for 10-Hz ASSRs. This configuration allows to record near-optimal signal-to-noise ratios for 80% of subjects.

  9. Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case

    PubMed Central

    Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique

    2017-01-01

    This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144
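
    The event-triggering rule described above can be sketched as: always run the prediction, and request an external measurement only when a bound on the estimation-error covariance is exceeded. The toy 1-D tracker below in Python is illustrative only; the bound, noise levels, and motion model are assumptions, not the P3-DX experiment.

        import numpy as np

        # Toy 1-D position tracker: request an external localisation measurement
        # only when the predicted error variance exceeds a bound (event-based sensing).
        dt, q_var, r_var = 0.1, 0.05, 0.01
        P_BOUND = 0.2                        # upper bound on the error variance

        rng = np.random.default_rng(2)
        x_true, x_est, P = 0.0, 0.0, 0.1
        requests = 0
        for k in range(200):
            u = 0.5                                       # commanded velocity
            x_true += u * dt + rng.normal(0.0, np.sqrt(q_var * dt))
            # Prediction (always performed, no communication needed).
            x_est += u * dt
            P += q_var * dt
            # Event: only query the camera sensor when uncertainty is too large.
            if P > P_BOUND:
                z = x_true + rng.normal(0.0, np.sqrt(r_var))
                K = P / (P + r_var)
                x_est += K * (z - x_est)
                P = (1.0 - K) * P
                requests += 1

        print(f"measurement requests: {requests} of 200 steps")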

  10. Excited-State Effective Masses in Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George Fleming, Saul Cohen, Huey-Wen Lin

    2009-10-01

    We apply black-box methods, i.e. where the performance of the method does not depend upon initial guesses, to extract excited-state energies from Euclidean-time hadron correlation functions. In particular, we extend the widely used effective-mass method to incorporate multiple correlation functions and produce effective mass estimates for multiple excited states. In general, these excited-state effective masses will be determined by finding the roots of some polynomial. We demonstrate the method using sample lattice data to determine excited-state energies of the nucleon and compare the results to other energy-level finding techniques.

  11. Effects of line-of-sight velocity on spaced-antenna measurements, part 3.5A

    NASA Technical Reports Server (NTRS)

    Royrvik, O.

    1984-01-01

    Horizontal wind velocities in the upper atmosphere, particularly the mesosphere, have been measured using a multitude of different techniques. Most techniques are based on stated or unstated assumptions about the wind field that may or may not be true. Some problems with the spaced antenna drifts (SAD) technique that usually appear to be overlooked are investigated. These problems are not unique to the SAD technique; very similar considerations apply to measurement of horizontal wind using multiple-beam Doppler radars as well. Simply stated, the SAD technique relies on scattering from multiple scatterers within an antenna beam of fairly large beam width. The combination of signals with random phase gives rise to an interference pattern on the ground. This pattern will drift across the ground with a velocity twice that of the ionospheric irregularities from which the radar signals are scattered. By using spaced receivers and measuring time delays of the signal fading in different antennas, it is possible to estimate the horizontal drift velocities.

  12. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented on board in real time.
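
    Equation error in the frequency domain reduces to a linear least-squares problem, because for a linear model x_dot = a x + b u the transforms satisfy jωX(ω) ≈ aX(ω) + bU(ω) at the excited frequencies. The Python sketch below estimates a and b for a simulated first-order system; it follows the general idea rather than the published implementation, and uses an FFT over a steady-state record instead of a recursive transform for brevity.

        import numpy as np

        # Simulate a first-order system x_dot = a*x + b*u with known truth.
        a_true, b_true = -2.0, 3.0
        dt = 0.005
        t = np.arange(8000) * dt
        u = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 1.5 * t)
        x = np.zeros_like(u)
        for k in range(len(t) - 1):
            x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

        # Use the final 20 s (steady state, whole number of input periods).
        xs, us = x[-4000:], u[-4000:]
        freqs = np.fft.rfftfreq(len(xs), dt)
        X, U = np.fft.rfft(xs), np.fft.rfft(us)
        band = (freqs > 0.1) & (freqs < 3.0)          # restrict to the excited band
        jw = 1j * 2 * np.pi * freqs[band]

        # Complex linear least squares for theta = [a, b] from jw*X = a*X + b*U.
        A = np.column_stack([X[band], U[band]])
        theta, *_ = np.linalg.lstsq(A, jw * X[band], rcond=None)
        print(theta.real)                             # close to [-2.0, 3.0]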

  13. A Review of Data Fusion Techniques

    PubMed Central

    2013-01-01

    The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion. PMID:24288502

  14. Tropical Cyclone Intensity Estimation Using Deep Convolutional Neural Networks

    NASA Technical Reports Server (NTRS)

    Maskey, Manil; Cecil, Dan; Ramachandran, Rahul; Miller, Jeffrey J.

    2018-01-01

    Estimating tropical cyclone intensity from satellite imagery alone is a challenging problem. The Dvorak technique has been applied successfully for more than 30 years and, with some modifications and improvements, is still used worldwide for tropical cyclone intensity estimation. A number of semi-automated techniques have been derived from the original Dvorak technique. However, these techniques suffer from subjective bias, as evident from the estimates on October 10, 2017 at 1500 UTC for Tropical Storm Ophelia: the Dvorak intensity estimates ranged from T2.3/33 kt (Tropical Cyclone Number 2.3/33 knots) from UW-CIMSS (University of Wisconsin-Madison - Cooperative Institute for Meteorological Satellite Studies) to T3.0/45 kt from TAFB (the National Hurricane Center's Tropical Analysis and Forecast Branch) to T4.0/65 kt from SAB (NOAA/NESDIS Satellite Analysis Branch). In this particular case, two human experts at TAFB and SAB differed by 20 knots in their Dvorak analyses, and the automated version at the University of Wisconsin was 12 knots lower than either of them. The National Hurricane Center (NHC) estimates about 10-20 percent uncertainty in its post analysis when only satellite-based estimates are available. The success of the Dvorak technique proves that spatial patterns in infrared (IR) imagery strongly relate to tropical cyclone intensity. This study utilizes deep learning, the current state of the art in pattern recognition and image recognition, to address the need for automated and objective tropical cyclone intensity estimation. Deep learning uses multi-layer neural networks consisting of several layers of simple computational units, and it learns discriminative features without relying on a human expert to identify which features are important. Our study mainly focuses on the convolutional neural network (CNN), a deep learning algorithm, to develop an objective tropical cyclone intensity estimation. A CNN is a supervised learning algorithm requiring a large amount of training data. Since archives of intensity data and tropical cyclone-centric satellite images are openly available, the training data are easily created by combining the two. Results, case studies, prototypes, and advantages of this approach will be discussed.

  15. Techniques for estimating streamflow characteristics in the Eastern and Interior coal provinces of the United States

    USGS Publications Warehouse

    Wetzel, Kim L.; Bettandorff, J.M.

    1986-01-01

    Techniques are presented for estimating various streamflow characteristics, such as peak flows, mean monthly and annual flows, flow durations, and flow volumes, at ungaged sites on unregulated streams in the Eastern Coal region. Streamflow data and basin characteristics for 629 gaging stations were used to develop multiple-linear-regression equations. Separate equations were developed for the Eastern and Interior Coal Provinces. Drainage area is an independent variable common to all equations. Other variables needed, depending on the streamflow characteristic, are mean annual precipitation, mean basin elevation, main channel length, basin storage, main channel slope, and forest cover. A ratio of the observed 50- to 90-percent flow durations was used in the development of relations to estimate low-flow frequencies in the Eastern Coal Province. Relations to estimate low flows in the Interior Coal Province are not presented because the standard errors were greater than 0.7500 log units and were considered to be of poor reliability.

  16. Techniques for estimating magnitude and frequency of floods in Minnesota

    USGS Publications Warehouse

    Guetzkow, Lowell C.

    1977-01-01

    Estimating relations have been developed to provide engineers and designers with improved techniques for defining flow-frequency characteristics to satisfy hydraulic planning and design requirements. The magnitude and frequency of floods up to the 100-year recurrence interval can be determined for most streams in Minnesota by the methods presented. By multiple regression analysis, equations have been developed for estimating flood-frequency relations at ungaged sites on natural flow streams. Eight distinct hydrologic regions are delineated within the State, with boundaries defined generally by river basin divides. Regression equations are provided for each region which relate selected frequency floods to significant basin parameters. For main-stem streams, graphs are presented showing floods for selected recurrence intervals plotted against contributing drainage area. Flow-frequency estimates for intervening sites along the Minnesota River, Mississippi River, and the Red River of the North can be derived from these graphs. Flood-frequency characteristics are tabulated for 201 gaging stations having 10 or more years of record.

  17. Microanalysis of iron oxidation states in earth and planetary materials

    NASA Astrophysics Data System (ADS)

    Bajt, S.; Sutton, S. R.; Delaney, J. S.

    1995-02-01

    Initial studies have been made on quantifying Fe oxidation states in different iron-bearing minerals using K-edge XANES. The energy of a weak pre-edge peak in the XANES spectrum due to the 1s-3d electron transition was used to quantify ferric/ferrous ratios with microprobe spatial resolution. The estimated accuracy of the technique was +/- 10% in terms of Fe3+/(Fe2+ + Fe3+). The detection limit was ~ 100 ppm with a synchrotron beam of ~ 100 μm in diameter. The pre-edge peak energy in well-characterized samples with known Fe oxidation states was found to be a linear function of the ferric/ferrous ratio. The technique was applied to altered magnetites (ideally Fe3O4), and to various silicates and oxides from meteorites.

  18. Precision of four otolith techniques for estimating age of white perch from a thermally altered reservoir

    USGS Publications Warehouse

    Snow, Richard A.; Porta, Michael J.; Long, James M.

    2018-01-01

    The White Perch Morone americana is an invasive species in many Midwestern states and is widely distributed in reservoir systems, yet little is known about the species' age structure and population dynamics. White Perch were first observed in Sooner Reservoir, a thermally altered cooling reservoir in Oklahoma, by the Oklahoma Department of Wildlife Conservation in 2006. It is unknown how thermally altered systems like Sooner Reservoir may affect the precision of White Perch age estimates. Previous studies have found that age structures from Largemouth Bass Micropterus salmoides and Bluegills Lepomis macrochirus from thermally altered reservoirs had false annuli, which increased error when estimating ages. Our objective was to quantify the precision of White Perch age estimates using four sagittal otolith preparation techniques (whole, broken, browned, and stained). Because Sooner Reservoir is thermally altered, we also wanted to identify the best month to collect a White Perch age sample based on aging precision. Ages of 569 White Perch (20–308 mm TL) were estimated using the four techniques. Age estimates from broken, stained, and browned otoliths ranged from 0 to 8 years; whole‐view otolith age estimates ranged from 0 to 7 years. The lowest mean coefficient of variation (CV) was obtained using broken otoliths, whereas the highest CV was observed using browned otoliths. July was the most precise month (lowest mean CV) for estimating age of White Perch, whereas April was the least precise month (highest mean CV). These results underscore the importance of knowing the best method to prepare otoliths for achieving the most precise age estimates and the best time of year to obtain those samples, as these factors may affect other estimates of population dynamics.

  19. A comparison of United States and United Kingdom EQ-5D health states valuations using a nonparametric Bayesian method.

    PubMed

    Kharroubi, Samer A; O'Hagan, Anthony; Brazier, John E

    2010-07-10

    Cost-effectiveness analysis of alternative medical treatments relies on having a measure of effectiveness, and many regard the quality adjusted life year (QALY) to be the current 'gold standard.' In order to compute QALYs, we require a suitable system for describing a person's health state, and a utility measure to value the quality of life associated with each possible state. There are a number of different health state descriptive systems, and we focus here on one known as the EQ-5D. Data for estimating utilities for different health states have a number of features that mean care is necessary in statistical modelling. There is interest in the extent to which valuations of health may differ between different countries and cultures, but few studies have compared preference values of health states obtained from different countries. This article applies a nonparametric model to estimate and compare EQ-5D health state valuation data obtained from two countries using Bayesian methods. The data set is the US and UK EQ-5D valuation studies where a sample of 42 states defined by the EQ-5D was valued by representative samples of the general population from each country using the time trade-off technique. We estimate a utility function across both countries which explicitly accounts for the differences between them, and is estimated using the data from both countries. The article discusses the implications of these results for future applications of the EQ-5D and for further work in this field. Copyright 2010 John Wiley & Sons, Ltd.

  20. A variational approach to parameter estimation in ordinary differential equations.

    PubMed

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
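
    As a minimal illustration of the conventional parameter estimation step that the approach above builds on (not the authors' variational augmentation of unconstrained inputs), the sketch below fits two hypothetical rate constants of a toy reaction network to noisy time-resolved data by least squares; all names and values are assumptions.

```python
# Conventional least-squares estimation of rate constants for a toy
# reaction network A -> B (rate k1), B -> 0 (rate k2). Illustrative only;
# the variational augmentation of unconstrained inputs is not reproduced.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k1, k2):
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]

def simulate(params, t_obs, y0):
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), y0, args=tuple(params),
                    t_eval=t_obs, rtol=1e-8)
    return sol.y

# Synthetic noisy observations of species B generated with "true" rates.
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 10.0, 25)
y0 = [1.0, 0.0]
true_params = [0.7, 0.3]
b_obs = simulate(true_params, t_obs, y0)[1] + 0.01 * rng.standard_normal(t_obs.size)

def residuals(params):
    return simulate(params, t_obs, y0)[1] - b_obs

fit = least_squares(residuals, x0=[0.3, 0.1], bounds=(0.0, np.inf))
print("estimated rate constants:", fit.x)
```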

  1. An estimator-predictor approach to PLL loop filter design

    NASA Technical Reports Server (NTRS)

    Statman, J. I.; Hurd, W. J.

    1986-01-01

    An approach to the design of digital phase locked loops (DPLLs), using estimation theory concepts in the selection of a loop filter, is presented. The key concept is that the DPLL closed-loop transfer function is decomposed into an estimator and a predictor. The estimator provides recursive estimates of phase, frequency, and higher order derivatives, while the predictor compensates for the transport lag inherent in the loop. This decomposition results in a straightforward loop filter design procedure, enabling use of techniques from optimal and sub-optimal estimation theory. A design example for a particular choice of estimator is presented, followed by analysis of the associated bandwidth, gain margin, and steady state errors caused by unmodeled dynamics. This approach is under consideration for the design of the Deep Space Network (DSN) Advanced Receiver Carrier DPLL.
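
    The decomposition can be sketched numerically: an alpha-beta (g-h) estimator recursively tracks phase and frequency from phase-detector samples, while a predictor extrapolates the estimate across an assumed transport lag before it steers the NCO. The gains, lag, and signal model below are illustrative assumptions, not the DSN receiver design.

```python
# Estimator-predictor loop sketch: recursive phase/frequency estimates from
# phase-detector samples, with the applied NCO command delayed by a small
# transport lag that the predictor compensates. Values are illustrative.
import numpy as np

dt = 1e-3
n_steps = 4000
lag = 2                      # transport lag of the loop, in samples
alpha, beta = 0.1, 0.005     # estimator gains (illustrative)
true_freq = 5.0              # rad/s frequency offset to be tracked
rng = np.random.default_rng(1)

phase_est, freq_est = 0.0, 0.0
nco_cmds = [0.0] * (lag + 1)         # pipeline modelling the transport lag
for k in range(n_steps):
    true_phase = true_freq * k * dt
    nco_phase = nco_cmds[0]          # command issued `lag` updates ago
    error = true_phase - nco_phase + 0.01 * rng.standard_normal()
    # Estimator: recursive phase and frequency estimates from the phase error.
    phase_est = phase_est + freq_est * dt + alpha * error
    freq_est = freq_est + beta * error / dt
    # Predictor: extrapolate the estimate across the transport lag before
    # it is applied, compensating the pipeline delay.
    nco_cmds = nco_cmds[1:] + [phase_est + freq_est * lag * dt]

print(f"frequency estimate {freq_est:.2f} rad/s (true {true_freq})")
```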

  2. Rare Event Simulation in Radiation Transport

    NASA Astrophysics Data System (ADS)

    Kollman, Craig

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
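
    A minimal importance sampling example in the same spirit: the probability of a rare Gaussian tail event is estimated by sampling from a shifted proposal and weighting each sample by the likelihood ratio between the true and simulated densities, which keeps the estimator unbiased. The distributions below are illustrative assumptions, not a transport model.

```python
# Importance sampling for a rare-event probability: estimate p = P(X > 4)
# for X ~ N(0, 1) by sampling from a shifted proposal N(4, 1) and weighting
# each sample by the likelihood ratio, keeping the estimator unbiased.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
threshold = 4.0
n = 100_000

# Naive Monte Carlo: almost no samples exceed the threshold.
x = rng.standard_normal(n)
p_naive = np.mean(x > threshold)

# Importance sampling with the proposal mean shifted to the threshold.
y = rng.normal(loc=threshold, scale=1.0, size=n)
weights = norm.pdf(y) / norm.pdf(y, loc=threshold, scale=1.0)
p_is = np.mean((y > threshold) * weights)

print(f"true      : {norm.sf(threshold):.3e}")
print(f"naive MC  : {p_naive:.3e}")
print(f"importance: {p_is:.3e}")
```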

  3. Improved statistical fluctuation analysis for measurement-device-independent quantum key distribution with four-intensity decoy-state method.

    PubMed

    Mao, Chen-Chen; Zhou, Xing-Yu; Zhu, Jian-Rong; Zhang, Chun-Hui; Zhang, Chun-Mei; Wang, Qin

    2018-05-14

    Recently, Zhang et al. [Phys. Rev. A 95, 012333 (2017)] developed a new approach to estimating the failure probability of the decoy-state BB84 QKD system when the finite-size key effect is taken into account, which offers security comparable to the Chernoff bound while resulting in an improved key rate and transmission distance. Building on Zhang et al.'s work, we extend this approach to measurement-device-independent quantum key distribution (MDI-QKD) and, for the first time, implement it on the four-intensity decoy-state MDI-QKD system. Moreover, by utilizing joint constraints and collective error-estimation techniques, we can markedly increase the performance of practical MDI-QKD systems compared with either three- or four-intensity decoy-state MDI-QKD using Chernoff bound analysis, and achieve a much higher level of security than approaches applying Gaussian approximation analysis.

  4. Data Assimilation to Extract Soil Moisture Information From SMAP Observations

    NASA Technical Reports Server (NTRS)

    Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.

    2017-01-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, they can be used to reduce the need for localized bias correction techniques typically implemented in data assimilation (DA) systems that tend to remove some of the independent information provided by satellite observations. Here, we use a statistical neural network (NN) algorithm to retrieve SMAP (Soil Moisture Active Passive) surface soil moisture estimates in the climatology of the NASA Catchment land surface model. Assimilating these estimates without additional bias correction is found to significantly reduce the model error and increase the temporal correlation against SMAP CalVal in situ observations over the contiguous United States. A comparison with assimilation experiments using traditional bias correction techniques shows that the NN approach better retains the independent information provided by the SMAP observations and thus leads to larger model skill improvements during the assimilation. A comparison with the SMAP Level 4 product shows that the NN approach is able to provide comparable skill improvements and thus represents a viable assimilation approach.

  5. Hyper-X Mach 10 Trajectory Reconstruction

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Martin, John G.; Tartabini, Paul V.; Thornblom, Mark N.

    2005-01-01

    This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated in order to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from data obtained during the Mach 10 test flight, which occurred on November 16th 2004.
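
    A one-dimensional sketch of the reconstruction idea (not the X-43A tool, which applies an extended Kalman filter to the full nonlinear equations of motion): inertial acceleration measurements drive the propagation, the state is augmented with a systematic accelerometer bias to be estimated, and redundant position measurements update the estimate. All constants are assumptions.

```python
# 1-D sketch of inertial trajectory reconstruction with a Kalman filter:
# state is [position, velocity, accelerometer bias]; acceleration
# measurements drive the propagation and position fixes update it.
import numpy as np

dt = 0.1
n = 300
rng = np.random.default_rng(3)

# Truth: constant acceleration, biased accelerometer.
true_acc, true_bias = 1.0, 0.2
t = np.arange(n) * dt
true_pos = 0.5 * true_acc * t**2
acc_meas = true_acc + true_bias + 0.05 * rng.standard_normal(n)
pos_meas = true_pos + 1.0 * rng.standard_normal(n)   # redundant position fixes

F = np.array([[1, dt, -0.5 * dt**2],
              [0, 1, -dt],
              [0, 0, 1]])
B = np.array([0.5 * dt**2, dt, 0.0])
H = np.array([[1.0, 0.0, 0.0]])
Q = np.diag([1e-6, 1e-4, 1e-8])
R = np.array([[1.0]])

x = np.zeros(3)
P = np.diag([10.0, 10.0, 1.0])
for k in range(n):
    # Propagate with the measured acceleration; the bias state removes the
    # (unknown) systematic accelerometer error from the propagation.
    x = F @ x + B * acc_meas[k]
    P = F @ P @ F.T + Q
    # Update with the position measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (pos_meas[k] - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P

print(f"estimated accelerometer bias: {x[2]:.3f} (true {true_bias})")
```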

  6. Uncertainty Management for Diagnostics and Prognostics of Batteries using Bayesian Techniques

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar; Goebel, Kai

    2007-01-01

    Uncertainty management has always been the key hurdle faced by diagnostics and prognostics algorithms. A Bayesian treatment of this problem provides an elegant and theoretically sound approach to the modern Condition- Based Maintenance (CBM)/Prognostic Health Management (PHM) paradigm. The application of the Bayesian techniques to regression and classification in the form of Relevance Vector Machine (RVM), and to state estimation as in Particle Filters (PF), provides a powerful tool to integrate the diagnosis and prognosis of battery health. The RVM, which is a Bayesian treatment of the Support Vector Machine (SVM), is used for model identification, while the PF framework uses the learnt model, statistical estimates of noise and anticipated operational conditions to provide estimates of remaining useful life (RUL) in the form of a probability density function (PDF). This type of prognostics generates a significant value addition to the management of any operation involving electrical systems.
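
    A minimal particle-filter sketch of the prognostic step (the RVM model identification is not reproduced): particles carry hypotheses about the rate of an assumed exponential capacity-fade model, are weighted against noisy capacity measurements, and are then propagated to an assumed failure threshold to produce an RUL distribution rather than a point estimate. The model, noise levels, and threshold are assumptions.

```python
# Particle-filter sketch for battery prognostics: track the fade rate of an
# assumed exponential capacity model from noisy measurements, then read the
# remaining useful life (RUL) distribution off the particle set.
import numpy as np

rng = np.random.default_rng(7)
n_particles = 2000
true_lambda = 0.005          # true fade rate per cycle (hypothetical)
meas_noise = 0.01
fail_threshold = 0.8         # end of life when capacity drops below 80%

# Each particle carries a hypothesis for the fade rate of C(k) = exp(-lam*k).
lam = rng.uniform(0.001, 0.02, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

n_obs = 30
for k in range(1, n_obs + 1):
    z = np.exp(-true_lambda * k) + meas_noise * rng.standard_normal()
    lam += 5e-5 * rng.standard_normal(n_particles)   # keep particles diverse
    pred = np.exp(-lam * k)
    weights *= np.exp(-0.5 * ((z - pred) / meas_noise) ** 2)
    weights /= weights.sum()
    if 1.0 / np.sum(weights**2) < n_particles / 2:   # resample on ESS collapse
        idx = rng.choice(n_particles, n_particles, p=weights)
        lam = lam[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

# Final resample so the percentiles below reflect the weights, then compute
# cycles until each particle's capacity crosses the failure threshold.
lam = lam[rng.choice(n_particles, n_particles, p=weights)]
rul = np.maximum(np.log(1.0 / fail_threshold) / np.maximum(lam, 1e-6) - n_obs, 0.0)
print(f"RUL median {np.median(rul):.1f} cycles, 90% interval "
      f"[{np.percentile(rul, 5):.1f}, {np.percentile(rul, 95):.1f}]")
```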

  7. A technique for estimating time of concentration and storage coefficient values for Illinois streams

    USGS Publications Warehouse

    Graf, Julia B.; Garklavs, George; Oberg, Kevin A.

    1982-01-01

    Values of the unit hydrograph parameters time of concentration (TC) and storage coefficient (R) can be estimated for streams in Illinois by a two-step technique developed from data for 98 gaged basins in the State. The sum of TC and R is related to stream length (L) and main channel slope (S) by the relation (TC + R)e = 35.2 L^0.39 S^-0.78. The variable R/(TC + R) is not significantly correlated with drainage area, slope, or length, but does exhibit a regional trend. Regional values of R/(TC + R) are used with the computed values of (TC + R)e to solve for estimated values of time of concentration (TCe) and storage coefficient (Re). The use of the variable R/(TC + R) is thought to account for variations in unit hydrograph parameters caused by physiographic variables such as basin topography, flood-plain development, and basin storage characteristics. (USGS)
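
    A worked example of the two-step computation, with hypothetical basin values, a hypothetical regional ratio R/(TC + R), and length and slope expressed in the units used in the original report:

```python
# Worked example of the two-step technique: compute (TC + R) from stream
# length and slope, then split it using a regional R/(TC + R) value.
# The basin numbers and the regional ratio below are hypothetical, and the
# length/slope units follow the original report.
L = 12.0          # main channel length (hypothetical value)
S = 8.0           # main channel slope (hypothetical value)
ratio = 0.55      # assumed regional value of R / (TC + R)

tc_plus_r = 35.2 * L**0.39 * S**-0.78          # (TC + R)e, hours
R = ratio * tc_plus_r                          # storage coefficient Re
TC = tc_plus_r - R                             # time of concentration TCe
print(f"(TC + R)e = {tc_plus_r:.2f} h, TCe = {TC:.2f} h, Re = {R:.2f} h")
```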

  8. State of balance of the cryosphere

    NASA Technical Reports Server (NTRS)

    Van Der Veen, C. J.

    1991-01-01

    Available observations and mass balance estimates of the cryosphere are summarized. Problems discussed include mountain glaciers, the Greenland ice sheet, the Antarctic ice sheet, conventional glacier measurement techniques, and satellite applications in glacier mass balance studies. It is concluded that the interior part of the Greenland ice sheet is thickening or in near equilibrium. Estimates of the mass balance of the Antarctic ice sheet suggest that it is positive, although the error limits allow for a slightly negative balance.

  9. Dynamic connectivity regression: Determining state-related changes in brain connectivity

    PubMed Central

    Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.

    2014-01-01

    Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408

  10. Nationwide summary of US Geological Survey regional regression equations for estimating magnitude and frequency of floods for ungaged sites, 1993

    USGS Publications Warehouse

    Jennings, M.E.; Thomas, W.O.; Riggs, H.C.

    1994-01-01

    For many years, the U.S. Geological Survey (USGS) has been involved in the development of regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally these equations have been developed on a statewide or metropolitan area basis as part of cooperative study programs with specific State Departments of Transportation or specific cities. The USGS, in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency, has compiled all the current (as of September 1993) statewide and metropolitan area regression equations into a micro-computer program titled the National Flood Frequency Program. This program includes regression equations for estimating flood-peak discharges and techniques for estimating a typical flood hydrograph for a given recurrence interval peak discharge for unregulated rural and urban watersheds. These techniques should be useful to engineers and hydrologists for planning and design applications. This report summarizes the statewide regression equations for rural watersheds in each State, summarizes the applicable metropolitan area or statewide regression equations for urban watersheds, describes the National Flood Frequency Program for making these computations, and provides much of the reference information on the extrapolation variables needed to run the program.

  11. Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph

    NASA Astrophysics Data System (ADS)

    Betts, A.; Bernat, G.

    2009-05-01

    Every calculation engine proposed in the literature of Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because they are gathered from the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.

  12. Steady-state evoked potentials possibilities for mental-state estimation

    NASA Technical Reports Server (NTRS)

    Junker, Andrew M.; Schnurer, John H.; Ingle, David F.; Downey, Craig W.

    1988-01-01

    The use of the human steady-state evoked potential (SSEP) as a possible measure of mental-state estimation is explored. A method for evoking a visual response to a sum-of-ten sine waves is presented. This approach provides simultaneous multiple frequency measurements of the human EEG to the evoking stimulus in terms of describing functions (gain and phase) and remnant spectra. Ways in which these quantities vary with the addition of performance tasks (manual tracking, grammatical reasoning, and decision making) are presented. Models of the describing function measures can be formulated using systems engineering technology. Relationships between model parameters and performance scores during manual tracking are discussed. Problems of unresponsiveness and lack of repeatability of subject responses are addressed in terms of a need for loop closure of the SSEP. A technique to achieve loop closure using a lock-in amplifier approach is presented. Results of a study designed to test the effectiveness of using feedback to consciously connect humans to their evoked response are presented. Findings indicate that conscious control of EEG is possible. Implications of these results in terms of secondary tasks for mental-state estimation and brain actuated control are addressed.
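
    The describing-function measurement can be sketched numerically: with a sum-of-sines stimulus, the gain and phase at each stimulus frequency follow from the ratio of the FFT components of the response and of the stimulus. The response model below (a scaled, delayed copy of the stimulus plus noise) and the chosen frequencies are assumptions, not the authors' apparatus.

```python
# Sketch: estimate gain and phase (describing function) at each stimulus
# frequency from a sum-of-sines input and a simulated "EEG-like" response.
# The response model (gain 0.5, 40 ms latency, additive noise) is hypothetical.
import numpy as np

fs = 256.0
t = np.arange(0, 8.0, 1.0 / fs)                 # 8-second record
freqs = np.array([6.0, 7.25, 8.5, 9.75, 11.0])  # stimulus frequencies (Hz)

rng = np.random.default_rng(11)
stimulus = sum(np.sin(2 * np.pi * f * t) for f in freqs)
response = 0.5 * sum(np.sin(2 * np.pi * f * (t - 0.040)) for f in freqs)
response += 0.2 * rng.standard_normal(t.size)

X = np.fft.rfft(stimulus)
Y = np.fft.rfft(response)
fft_freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

for f in freqs:
    k = np.argmin(np.abs(fft_freqs - f))        # bin of this stimulus line
    h = Y[k] / X[k]                             # describing function at f
    print(f"{f:5.2f} Hz  gain {abs(h):.2f}  phase {np.degrees(np.angle(h)):7.1f} deg")
```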

  13. Experimental determination of Grüneisen gamma for two dissimilar materials (PEEK and Al 5083) via the shock-reverberation technique

    NASA Astrophysics Data System (ADS)

    Roberts, Andrew; Appleby-Thomas, Gareth; Hazell, Paul

    2011-06-01

    Following multiple loading events, the resultant shock state of a material will lie away from the principal Hugoniot. Prediction of such states requires knowledge of a material's equation of state. The material-specific variable Grüneisen gamma (Γ) defines the shape of "off-Hugoniot" points in energy-volume-pressure space. Experimentally, the shock-reverberation technique (based on the principle of impedance matching) has previously allowed estimation of the first-order Grüneisen gamma term (Γ1) for a silicone elastomer. Here, this approach was employed to calculate Γ1 for two dissimilar materials, polyether ether ketone (PEEK) and the armour-grade aluminium alloy 5083 (H32), thereby allowing discussion of limitations of this technique in the context of plate-impact experiments employing manganin stress gauges. Finally, the experimentally determined values for Γ1 were further refined by comparison between experimental records and numerical simulations carried out using the commercial code ANSYS Autodyn®.

  14. Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops

    USGS Publications Warehouse

    Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement. Significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. This paper concludes with a discussion of the lessons learned, and points out further work and future strategy. © 2005 Elsevier Ltd. All rights reserved.

  15. Impurity States and diamagnetic susceptibility of a donor in a triangular quantum well

    NASA Astrophysics Data System (ADS)

    Kalpana, P.; Reuben, A. Merwyn Jasper D.; Nithiananthi, P.; Jayakumar, K.

    2017-05-01

    We have calculated the binding energy and the diamagnetic susceptibility (χdia) of the ground (1s) and a few low-lying excited states (2s and 2p±) in a GaAs/AlxGa1-xAs Triangular Quantum Well (TQW) for an Al composition of x = 0.3. Since the expectation value ⟨r²⟩ characterizes the carrier localization in nanostructured systems, and the calculation of χdia involves ⟨r²⟩, this quantity has also been estimated as a function of well width. The Schrödinger equation has been solved using a variational technique involving Airy functions in the effective-mass approximation. The results are presented and discussed.

  16. Crop identification and area estimation over large geographic areas using LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. LANDSAT MSS data was adequate to accurately identify wheat in Kansas; corn and soybean estimates in Indiana were less accurate. Computer-aided analysis techniques were effectively used to extract crop identification information from LANDSAT data. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels. Training statistics were successfully extended from one county to other counties having similar crops and soils if the training areas sampled the total variation of the area to be classified.

  17. Projected use of grazed forages in the United States: 2000 to 2050: A technical document supporting the 2000 USDA Forest Service RPA Assessment

    Treesearch

    Larry W. van Tassell; E. Tom Bartlett; John E. Mitchell

    2001-01-01

    Scenario analysis techniques were used to combine projections from 35 grazed forage experts to estimate future forage demand scenarios and examine factors that are anticipated to impact the use of grazed forages in the South, North, and West Regions of the United States. The amount of land available for forage production is projected to decrease in all regions while...

  18. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    NASA Astrophysics Data System (ADS)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to local modeling techniques based on a new idea called "Just-In-Time (JIT) modeling". To apply JIT modeling online to a large database, "Large-scale database-based Online Modeling (LOM)" has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both "stepwise selection" and quantization. In order to predict the long-term state of the plant without using future data of the manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
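
    The Just-In-Time idea can be sketched as follows: for each query, the nearest stored samples are retrieved from the database and a local model is fitted to them on the fly. The sketch below uses a plain nearest-neighbour search and a local linear model on synthetic data; LOM's stepwise selection and quantization, which make the retrieval efficient for large databases, are not reproduced.

```python
# Minimal Just-In-Time (local) modeling sketch: for each query point,
# retrieve the k nearest samples from a stored database and fit a local
# linear model on them.
import numpy as np

rng = np.random.default_rng(5)

# "Database" of past plant observations: inputs X, measured output y.
X = rng.uniform(-3, 3, size=(5000, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(5000)

def jit_predict(x_query, k=50):
    """Predict y at x_query from the k nearest database samples."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argpartition(d, k)[:k]
    A = np.column_stack([np.ones(k), X[idx]])      # local linear model
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef @ np.concatenate(([1.0], x_query))

xq = np.array([0.5, -1.0])
print("JIT estimate:", jit_predict(xq), "true:", np.sin(0.5) + 0.5 * 1.0)
```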

  19. Discrete filtering techniques applied to sequential GPS range measurements

    NASA Technical Reports Server (NTRS)

    Vangraas, Frank

    1987-01-01

    The basic navigation solution is described for position and velocity based on range and delta range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques is examined to reduce the white noise distortions on the sequential range measurements. A second order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction, the input noise variance divided by the output noise variance, of a factor of four. Recommendations for further noise reduction based on higher order Kalman filters or additional delta range measurements are included.
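
    A sketch of the per-satellite smoothing idea: a two-state (range, range-rate) Kalman filter processes a noisy range sequence, and the input and output noise variances are compared. The dynamics and noise levels below are illustrative assumptions, not GPS receiver values.

```python
# Two-state (range, range-rate) Kalman filter smoothing a noisy sequence of
# range measurements from a single satellite. Values are illustrative only.
import numpy as np

dt = 1.0
n = 200
rng = np.random.default_rng(2)

true_rate = -150.0                         # range rate (assumed constant)
true_range = 2.0e7 + true_rate * dt * np.arange(n)
z = true_range + 5.0 * rng.standard_normal(n)     # white measurement noise

F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity dynamics
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-2, 1e-3])                  # small process noise
R = np.array([[25.0]])                     # measurement noise variance

x = np.array([z[0], (z[1] - z[0]) / dt])   # initialise from first two samples
P = np.diag([25.0, 50.0])
err_in, err_out = [], []
for k in range(2, n):
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    if k >= 50:                            # skip the initial transient
        err_in.append(z[k] - true_range[k])
        err_out.append(x[0] - true_range[k])

print("range noise variance reduction factor:",
      round(np.var(err_in) / np.var(err_out), 1))
```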

  20. The National Streamflow Statistics Program: A Computer Program for Estimating Streamflow Statistics for Ungaged Sites

    USGS Publications Warehouse

    Ries(compiler), Kernell G.; With sections by Atkins, J. B.; Hummel, P.R.; Gray, Matthew J.; Dusenbury, R.; Jennings, M.E.; Kirby, W.H.; Riggs, H.C.; Sauer, V.B.; Thomas, W.O.

    2007-01-01

    The National Streamflow Statistics (NSS) Program is a computer program that should be useful to engineers, hydrologists, and others for planning, management, and design applications. NSS compiles all current U.S. Geological Survey (USGS) regional regression equations for estimating streamflow statistics at ungaged sites in an easy-to-use interface that operates on computers with Microsoft Windows operating systems. NSS expands on the functionality of the USGS National Flood Frequency Program, and replaces it. The regression equations included in NSS are used to transfer streamflow statistics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, the equations were developed on a statewide or metropolitan-area basis as part of cooperative study programs. Equations are available for estimating rural and urban flood-frequency statistics, such as the 100-year flood, for every state, for Puerto Rico, and for the island of Tutuila, American Samoa. Equations are available for estimating other statistics, such as the mean annual flow, monthly mean flows, flow-duration percentiles, and low-flow frequencies (such as the 7-day, 10-year low flow) for less than half of the states. All equations available for estimating streamflow statistics other than flood-frequency statistics assume rural (non-regulated, non-urbanized) conditions. The NSS output provides indicators of the accuracy of the estimated streamflow statistics. The indicators may include any combination of the standard error of estimate, the standard error of prediction, the equivalent years of record, or 90 percent prediction intervals, depending on what was provided by the authors of the equations. The program includes several other features that can be used only for flood-frequency estimation. These include the ability to generate flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals, estimates of the probable maximum flood, extrapolation of the 500-year flood when an equation for estimating it is not available, and weighting techniques to improve flood-frequency estimates for gaging stations and ungaged sites on gaged streams. This report describes the regionalization techniques used to develop the equations in NSS and provides guidance on the applicability and limitations of the techniques. The report also includes a users manual and a summary of equations available for estimating basin lagtime, which is needed by the program to generate flood hydrographs. The NSS software and accompanying database, and the documentation for the regression equations included in NSS, are available on the Web at http://water.usgs.gov/software/.

  1. A TRMM-Calibrated Infrared Rainfall Algorithm Applied Over Brazil

    NASA Technical Reports Server (NTRS)

    Negri, A. J.; Xu, L.; Adler, R. F.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The development of a satellite infrared technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall in Amazonia are presented. The Convective-Stratiform Technique, calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), is applied during January to April 1999 over northern South America. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, is presented. Results compare well (a one-hour lag) with the diurnal cycle derived from Tropical Ocean-Global Atmosphere (TOGA) radar-estimated rainfall in Rondonia. The satellite estimates reveal that the convective rain constitutes, in the mean, 24% of the rain area while accounting for 67% of the rain volume. The effects of geography (rivers, lakes, coasts) and topography on the diurnal cycle of convection are examined. In particular, the Amazon River, downstream of Manaus, is shown to both enhance early morning rainfall and inhibit afternoon convection. Monthly estimates from this technique, dubbed CST/TMI, are verified over a dense rain gage network in the state of Ceara, in northeast Brazil. The CST/TMI showed a high bias equal to +33% of the gage mean, indicating that possibly the TMI estimates alone are also high. The root mean square difference (after removal of the bias) equaled 36.6% of the gage mean. The correlation coefficient was 0.77 based on 72 station-months.
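
    The verification statistics quoted above (relative bias, root mean square difference after bias removal as a percentage of the gage mean, and the correlation coefficient) can be computed as in the sketch below; the two monthly series are synthetic stand-ins for the satellite and gage data.

```python
# Verification statistics of the kind quoted above: relative bias, RMS
# difference after bias removal (as a percentage of the gage mean), and
# the correlation coefficient. The two station-month series are synthetic.
import numpy as np

rng = np.random.default_rng(9)
gauge = rng.gamma(shape=2.0, scale=60.0, size=72)          # station-months, mm
satellite = 1.3 * gauge + 30.0 * rng.standard_normal(72)   # biased estimate

bias_pct = 100.0 * (satellite.mean() - gauge.mean()) / gauge.mean()
resid = (satellite - satellite.mean()) - (gauge - gauge.mean())
rmsd_pct = 100.0 * np.sqrt(np.mean(resid**2)) / gauge.mean()
corr = np.corrcoef(satellite, gauge)[0, 1]
print(f"bias {bias_pct:+.1f}%  bias-removed RMSD {rmsd_pct:.1f}%  r = {corr:.2f}")
```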

  2. Arterial Mechanical Motion Estimation Based on a Semi-Rigid Body Deformation Approach

    PubMed Central

    Guzman, Pablo; Hamarneh, Ghassan; Ros, Rafael; Ros, Eduardo

    2014-01-01

    Arterial motion estimation in ultrasound (US) sequences is a hard task due to noise and discontinuities in the signal derived from US artifacts. Characterizing the mechanical properties of the artery is a promising novel imaging technique to diagnose various cardiovascular pathologies and a new way of obtaining relevant clinical information, such as determining the absence of the dicrotic peak, estimating the Augmentation Index (AIx), the arterial pressure or the arterial stiffness. One of the advantages of US imaging is the non-invasive nature of the technique, unlike invasive techniques such as intravascular ultrasound (IVUS) or angiography, plus the relatively low cost of the US units. In this paper, we propose a semi-rigid deformable method based on soft-body dynamics, realized by a hybrid motion approach combining cross-correlation and optical flow methods, to quantify the elasticity of the artery. We evaluate and compare different techniques (for instance, optical flow methods) on which our approach is based. The goal of this comparative study is to identify the best model to be used and the impact of the accuracy of these different stages in the proposed method. To this end, an exhaustive assessment has been conducted in order to decide which model is the most appropriate for registering the variation of the arterial diameter over time. Our experiments involved a total of 1620 evaluations within nine simulated sequences of 84 frames each and the estimation of four error metrics. We conclude that our proposed approach obtains approximately 2.5 times higher accuracy than conventional state-of-the-art techniques. PMID:24871987

  3. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.

  4. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE PAGES

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    2017-10-31

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
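
    The central idea can be sketched with a single-machine swing equation whose mechanical input power follows a time-correlated (Ornstein-Uhlenbeck) process that each ensemble member integrates during the forecast, while an ensemble Kalman filter assimilates noisy rotor-speed measurements. All constants, the reduction to a single machine, and the measurement model are assumptions, not the test case of the paper.

```python
# Minimal sketch: single-machine swing dynamics driven by a time-correlated
# (Ornstein-Uhlenbeck) mechanical power, with a stochastic EnKF assimilating
# noisy rotor-speed measurements. Every ensemble member integrates its own
# OU fluctuation during the forecast. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(13)
dt, n_steps, n_ens = 0.01, 500, 100
H_in, D = 4.0, 1.0                   # inertia and damping constants
Pe = 1.0                             # electrical power (assumed known)
tau, sigma = 1.0, 0.05               # OU correlation time and noise scale

def step(delta, omega, pm):
    """One forecast step: swing dynamics plus OU update of mechanical power."""
    d_delta = omega * dt
    d_omega = (pm - Pe - D * omega) / (2 * H_in) * dt
    pm_new = pm - (pm - 1.0) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal(pm.shape)
    return delta + d_delta, omega + d_omega, pm_new

# Truth run and ensemble (state: rotor angle, speed deviation, mech. power).
delta_t, omega_t, pm_t = np.array([0.1]), np.array([0.0]), np.array([1.0])
delta, omega, pm = (rng.normal(0.1, 0.05, n_ens), rng.normal(0.0, 0.05, n_ens),
                    rng.normal(1.0, 0.1, n_ens))
r_obs = 1e-3

for k in range(n_steps):
    delta_t, omega_t, pm_t = step(delta_t, omega_t, pm_t)
    delta, omega, pm = step(delta, omega, pm)
    if k % 10 == 0:                                  # assimilate every 10 steps
        z = omega_t[0] + np.sqrt(r_obs) * rng.standard_normal()
        ens = np.vstack([delta, omega, pm])          # 3 x n_ens
        anom = ens - ens.mean(axis=1, keepdims=True)
        hx = omega
        hx_anom = hx - hx.mean()
        pxz = anom @ hx_anom / (n_ens - 1)           # cross covariance
        pzz = hx_anom @ hx_anom / (n_ens - 1) + r_obs
        gain = pxz / pzz
        perturbed = z + np.sqrt(r_obs) * rng.standard_normal(n_ens)
        ens = ens + np.outer(gain, perturbed - hx)
        delta, omega, pm = ens

print(f"estimated mechanical power {pm.mean():.3f} (truth {pm_t[0]:.3f})")
```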

  5. Determining the Spatial and Seasonal Variability in OM/OC Ratios across the U.S. Using Multiple Regression

    EPA Science Inventory

    Data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network are used to estimate organic mass to organic carbon (OM/OC) ratios across the United States by extending previously published multiple regression techniques. Our new methodology addresses com...

  6. Billy J. Roberts | NREL

    Science.gov Websites

    Staff profile (fragmentary): topics include wildlife and energy technology interactions and geothermal technology; cited paper: "Estimate of the Geothermal Energy Resource in the Major Sedimentary Basins in the United States"; member of the Systems Modeling & Geospatial Data Science Group in the Strategic Energy Analysis Center.

  7. System identification principles in studies of forest dynamics.

    Treesearch

    Rolfe A. Leary

    1970-01-01

    Shows how it is possible to obtain governing equation parameter estimates on the basis of observed system states. The approach used represents a constructive alternative to regression techniques for models expressed as differential equations. This approach allows scientists to more completely quantify knowledge of forest development processes, to express theories in...

  8. Contingent valuation of fuel hazard reduction treatments

    Treesearch

    John B. Loomis; Armando Gonzalez-Caban

    2008-01-01

    This chapter presents a stated preference technique for estimating the public benefits of reducing wildfires to residents of California, Florida, and Montana from two alternative fuel reduction programs: prescribed burning, and mechanical fuels reduction. The two fuel reduction programs under study are quite relevant to people living in California, Florida, and...

  9. TRANSFERRING TECHNOLOGIES, TOOLS AND TECHNIQUES: THE NATIONAL COASTAL ASSESSMENT

    EPA Science Inventory

    The purpose of the National Coastal Assessment (NCA) is to estimate the status and trends of the condition of the nation's coastal resources on a state, regional and national basis. Based on NCA monitoring from 1999-2001, 100% of the nation's estuarine waters (at over 2500 locati...

  10. Accuracy assessment: The statistical approach to performance evaluation in LACIE. [Great Plains corridor, United States

    NASA Technical Reports Server (NTRS)

    Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.

  11. Comparison of techniques for estimating annual lake evaporation using climatological data

    USGS Publications Warehouse

    Andersen, M.E.; Jobson, H.E.

    1982-01-01

    Mean annual evaporation estimates were determined for 30 lakes by use of a numerical model (Morton, 1979) and by use of an evaporation map prepared by the U.S. Weather Service (Kohler et al., 1959). These estimates were compared to the reported value of evaporation determined from measurements on each lake. Various lengths of observation and methods of measurement were used among the 30 lakes. The evaporation map provides annual evaporation estimates which are more consistent with observations than those determined by use of the numerical model. The map cannot provide monthly estimates, however, and is only available for the contiguous United States. The numerical model can provide monthly estimates for shallow lakes and is based on monthly observations of temperature, humidity, and sunshine duration.

  12. Improved estimation of subject-level functional connectivity using full and partial correlation with empirical Bayes shrinkage.

    PubMed

    Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A

    2018-05-15

    Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.
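
    The shrinkage step can be sketched as follows: each subject-level estimate is pulled toward the group mean with a weight equal to the estimated fraction of total variance attributable to between-subject (signal) variance. The sketch below uses synthetic data and assumes the within-subject noise variance is known, which is a simplification of the authors' measurement error model.

```python
# Sketch of empirical Bayes shrinkage of subject-level connectivity values
# toward the group mean: the per-connection shrinkage weight is the fraction
# of total variance attributable to between-subject (signal) variance.
import numpy as np

rng = np.random.default_rng(8)
n_subjects, n_connections = 40, 200

true_group_mean = rng.uniform(-0.3, 0.6, n_connections)
between_sd = 0.10                      # subject-to-subject spread
within_sd = 0.20                       # sampling noise in each subject's estimate

true_subject = true_group_mean + between_sd * rng.standard_normal((n_subjects, n_connections))
raw = true_subject + within_sd * rng.standard_normal((n_subjects, n_connections))

# Empirical Bayes: estimate variance components from the data, then shrink.
group_mean = raw.mean(axis=0)
total_var = raw.var(axis=0, ddof=1)
within_var = within_sd**2              # assumed known here for simplicity
between_var = np.maximum(total_var - within_var, 0.0)
weight = between_var / (between_var + within_var)
shrunk = group_mean + weight * (raw - group_mean)

mse_raw = np.mean((raw - true_subject) ** 2)
mse_shrunk = np.mean((shrunk - true_subject) ** 2)
print(f"MSE raw {mse_raw:.4f}  vs shrunk {mse_shrunk:.4f}")
```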

  13. Real-time 3D reconstruction of road curvature in far look-ahead distance from analysis of image sequences

    NASA Astrophysics Data System (ADS)

    Behringer, Reinhold

    1995-12-01

    A system for visual road recognition in far look-ahead distance, implemented in the autonomous road vehicle VaMP (a passenger car), is described. Visual cues of a road in a video image are the bright lane markings and the edges formed at the road borders. In a distance of more than 100 m, the most relevant road cue is the homogeneous road area, limited by the two border edges. These cues can be detected by the image processing module KRONOS applying edge detection techniques and areal 2D segmentation based on resolution triangles (analogous to a resolution pyramid). An estimation process performs an update of a state vector, which describes spatial road shape and vehicle orientation relative to the road. This state vector is estimated every 40 ms by exploiting knowledge about the vehicle movement (spatio-temporal model of vehicle dynamics) and the road design rules (clothoidal segments). Kalman filter techniques are applied to obtain an optimal estimate of the state vector by evaluating the measurements of the road border positions in the image sequence taken by a set of CCD cameras. The road consists of segments with piecewise constant curvature parameters. The borders between these segments can be detected by applying methods which have been developed for detection of discontinuities during time-discrete measurements. The road recognition system has been tested in autonomous rides with VaMP on public Autobahnen in real traffic at speeds up to 130 km/h.

  14. Joint sparsity based heterogeneous data-level fusion for target detection and estimation

    NASA Astrophysics Data System (ADS)

    Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe

    2017-05-01

    Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.

  15. Estimation for general birth-death processes

    PubMed Central

    Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.

    2013-01-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261

  16. Estimation for general birth-death processes.

    PubMed

    Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2014-04-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of "particles" in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution.

  17. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    PubMed

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameters(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for the use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  18. Status of the desert tortoise in Red Rock Canyon State Park

    USGS Publications Warehouse

    Berry, Kristin H.; Keith, Kevin; Bailey, Tracy Y.

    2008-01-01

    We surveyed for desert tortoises, Gopherus agassizii, in the western part of Red Rock Canyon State Park and watershed in eastern Kern County, California, between 2002 and 2004. We used two techniques: a single demographic plot (~4 km2 ) and 37 landscape plots (1-ha each). We estimated population densities of tortoises to be between 2.7 and 3.57/km2 and the population in the Park to be 108 tortoises. We estimated the death rate at 67% for subadults and adults during the last 4 yrs. Mortality was high for several reasons: gunshot deaths, avian predation, mammalian predation, and probably disease. Historic and recent anthropogenic impacts from State Highway 14, secondary roads, trash, cross-country vehicle tracks, and livestock have contributed to elevated death rates and degradation of habitat. We propose conservation actions to reduce mortality.

  19. Automated Quantitative Nuclear Cardiology Methods

    PubMed Central

    Motwani, Manish; Berman, Daniel S.; Germano, Guido; Slomka, Piotr J.

    2016-01-01

    Quantitative analysis of SPECT and PET has become a major part of nuclear cardiology practice. Current software tools can automatically segment the left ventricle, quantify function, establish myocardial perfusion maps and estimate global and local measures of stress/rest perfusion – all with minimal user input. State-of-the-art automated techniques have been shown to offer high diagnostic accuracy for detecting coronary artery disease, as well as predict prognostic outcomes. This chapter briefly reviews these techniques, highlights several challenges and discusses the latest developments. PMID:26590779

  20. Estimating flood hydrographs for urban basins in North Carolina

    USGS Publications Warehouse

    Mason, R.R.; Bales, J.D.

    1996-01-01

    A dimensionless hydrograph for North Carolina was developed from data collected in 29 urban and urbanizing basins in the State. The dimensionless hydrograph can be used with an estimate of peak flow and basin lagtime to synthesize a design flood hydrograph for urban basins in North Carolina. Peak flows can be estimated from a number of available techniques; a procedure for estimating basin lagtime from main channel length, stream slope, and percentage of impervious area was developed from data collected at 50 sites and is presented in this report. The North Carolina dimensionless hydrograph provides satisfactory predictions of flood hydrographs in all regions of the State except for basins in or near Asheville where the method overestimated 11 of 12 measured hydrographs. A previously developed dimensionless hydrograph for urban basins in the Piedmont and upper Coastal Plain of South Carolina provides better flood-hydrograph predictions for the Asheville basins and has a standard error of 21 percent as compared to 41 percent for the North Carolina dimensionless hydrograph.

  1. Analysis technique for controlling system wavefront error with active/adaptive optics

    NASA Astrophysics Data System (ADS)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
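
    The core correction step can be sketched with a linear optics model: a matrix of actuator influence functions maps commands to wavefront changes, and the commands that minimize the residual RMS wavefront error follow from a least-squares solve. The influence functions and disturbance below are randomly generated placeholders, not outputs of any particular analysis tool.

```python
# Sketch of wavefront-error control with a linear optics model: solve for the
# actuator commands that minimize the residual RMS WFE by least squares.
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_actuators = 500, 12      # wavefront sample points, actuators

influence = rng.standard_normal((n_samples, n_actuators))   # WFE per unit command
disturbance = (influence @ rng.uniform(-1, 1, n_actuators)
               + 0.05 * rng.standard_normal(n_samples))      # correctable + residual part

commands, *_ = np.linalg.lstsq(influence, -disturbance, rcond=None)
residual = disturbance + influence @ commands

rms = lambda w: np.sqrt(np.mean(w**2))
print(f"RMS WFE before {rms(disturbance):.3f}, after {rms(residual):.3f}")
```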

  2. Estimating distribution and connectivity of recolonizing American marten in the northeastern United States using expert elicitation techniques

    USGS Publications Warehouse

    Aylward, C.M.; Murdoch, J.D.; Donovan, Therese M.; Kilpatrick, C.W.; Bernier, C.; Katz, J.

    2018-01-01

    The American marten Martes americana is a species of conservation concern in the northeastern United States due to widespread declines from over‐harvesting and habitat loss. Little information exists on current marten distribution and how landscape characteristics shape patterns of occupancy across the region, which could help develop effective recovery strategies. The rarity of marten and lack of historical distribution records are also problematic for region‐wide conservation planning. Expert opinion can provide a source of information for estimating species–landscape relationships and is especially useful when empirical data are sparse. We created a survey to elicit expert opinion and build a model that describes marten occupancy in the northeastern United States as a function of landscape conditions. We elicited opinions from 18 marten experts that included wildlife managers, trappers and researchers. Each expert estimated occupancy probability at 30 sites in their geographic region of expertise. We, then, fit the response data with a set of 58 models that incorporated the effects of covariates related to forest characteristics, climate, anthropogenic impacts and competition at two spatial scales (1.5 and 5 km radii), and used model selection techniques to determine the best model in the set. Three top models had strong empirical support, which we model averaged based on AIC weights. The final model included effects of five covariates at the 5‐km scale: percent canopy cover (positive), percent spruce‐fir land cover (positive), winter temperature (negative), elevation (positive) and road density (negative). A receiver operating characteristic curve indicated that the model performed well based on recent occurrence records. We mapped distribution across the region and used circuit theory to estimate movement corridors between isolated core populations. The results demonstrate the effectiveness of expert‐opinion data at modeling occupancy for rare species and provide tools for planning marten recovery in the northeastern United States.

  3. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
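
    The recharge approximation itself is a one-line unit conversion: average annual base flow divided by drainage area, expressed as a depth of water per year. The sketch below uses illustrative numbers, not values from the report.

```python
# Minimal sketch of the recharge approximation used in the report: average annual
# base flow divided by drainage area. Values below are illustrative, not from the study.
base_flow_cfs = 120.0        # average annual base flow, cubic feet per second
drainage_area_mi2 = 150.0    # drainage area, square miles

SECONDS_PER_YEAR = 365.25 * 86400
SQFT_PER_MI2 = 5280.0 ** 2

# depth of water spread over the basin, converted from feet to inches per year
recharge_in_per_yr = base_flow_cfs * SECONDS_PER_YEAR / (drainage_area_mi2 * SQFT_PER_MI2) * 12.0
print(f"Estimated recharge: {recharge_in_per_yr:.1f} inches/year")
```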

  4. Bayesian nonparametric estimation of EQ-5D utilities for United States using the existing United Kingdom data.

    PubMed

    Kharroubi, Samer A

    2017-10-06

    Valuations of health state descriptors such as EQ-5D or SF-6D have been conducted in different countries. The results from one country can serve as informative priors in the analysis of a study from another country, enabling better estimation in the new country than analyzing its data separately. Data from two EQ-5D valuation studies were analyzed, in which values for 42 health states were elicited with the time trade-off technique from representative samples of the UK and US populations. A Bayesian non-parametric approach was applied to predict the health utilities of the US population, with the UK results used as informative priors in the model to improve the estimates. The findings showed that employing the additional information from the UK data produced US utility estimates much more precise than would have been possible using the US study data alone. This method is likely to prove useful in countries where conducting large valuation studies is not feasible.
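
    A minimal conjugate-normal sketch (far simpler than the nonparametric model used in the study) illustrates the borrowing-of-strength idea: a UK-informed prior for a single health-state utility is updated with a small hypothetical US sample.

```python
import numpy as np

# Toy illustration (not the paper's Bayesian nonparametric model): using a UK-based
# estimate as an informative normal prior for a US health-state utility, then updating
# with a small US sample. All numbers are invented for illustration.
prior_mean, prior_sd = 0.55, 0.10          # "UK-informed" prior for one health state
us_values = np.array([0.48, 0.52, 0.60, 0.45, 0.50])   # hypothetical US valuations
like_sd = 0.15                             # assumed known sampling SD

n = len(us_values)
post_var = 1.0 / (1.0 / prior_sd**2 + n / like_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + us_values.sum() / like_sd**2)
print(post_mean, np.sqrt(post_var))        # posterior is pulled toward the US data
```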

  5. An adaptive transmission protocol for managing dynamic shared states in collaborative surgical simulation.

    PubMed

    Qin, J; Choi, K S; Ho, Simon S M; Heng, P A

    2008-01-01

    A force prediction algorithm is proposed to facilitate virtual-reality (VR) based collaborative surgical simulation by reducing the effect of network latencies. State regeneration is used to correct the prediction estimates. This algorithm is incorporated into an adaptive transmission protocol in which auxiliary features such as view synchronization and coupling control are provided to ensure system consistency. We implemented this protocol using a multi-threaded technique on a cluster-based network architecture.

  6. Flight test evaluation of predicted light aircraft drag, performance, and stability

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Fox, S. R.

    1979-01-01

    A technique was developed which permits simultaneous extraction of complete lift, drag, and thrust power curves from time histories of a single aircraft maneuver such as a pullup (from V_max to V_stall) and pushover (back to V_max for level flight). The technique is an extension to non-linear equations of motion of the parameter identification methods of Iliff and Taylor and includes provisions for internal data compatibility improvement as well. The technique was shown to be capable of correcting random errors in the most sensitive data channel and yielding highly accurate results. This technique was applied to flight data taken on the ATLIT aircraft. The drag and power values obtained from the initial least squares estimate are about 15% less than the 'true' values. If one takes into account the rather dirty wing and fuselage existing at the time of the tests, however, the predictions are reasonably accurate. The steady state lift measurements agree well with the extracted values only for small values of alpha. The predicted value of the lift at alpha = 0 is about 33% below that found in steady state tests while the predicted lift slope is 13% below the steady state value.

  7. A review on prognostics and health monitoring of Li-ion battery

    NASA Astrophysics Data System (ADS)

    Zhang, Jingliang; Lee, Jay

    2011-08-01

    The functionality and reliability of Li-ion batteries as major energy storage devices have received more and more attention from a wide spectrum of stakeholders, including federal/state policymakers, business leaders, technical researchers, environmental groups and the general public. Failures of Li-ion battery not only result in serious inconvenience and enormous replacement/repair costs, but also risk catastrophic consequences such as explosion due to overheating and short circuiting. In order to prevent severe failures from occurring, and to optimize Li-ion battery maintenance schedules, breakthroughs in prognostics and health monitoring of Li-ion batteries, with an emphasis on fault detection, correction and remaining-useful-life prediction, must be achieved. This paper reviews various aspects of recent research and developments in Li-ion battery prognostics and health monitoring, and summarizes the techniques, algorithms and models used for state-of-charge (SOC) estimation, current/voltage estimation, capacity estimation and remaining-useful-life (RUL) prediction.
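
    One of the simplest SOC estimation techniques covered in reviews of this kind is coulomb counting, sketched below with an illustrative cell capacity and discharge profile.

```python
import numpy as np

# Minimal coulomb-counting sketch of state-of-charge (SOC) estimation, one of the
# basic techniques surveyed in such reviews; capacity and current values are illustrative.
capacity_ah = 2.5                     # rated cell capacity in ampere-hours
dt = 1.0                              # sample period in seconds
current = np.full(3600, -1.25)        # 1 hour of constant 1.25 A discharge (negative = discharge)

soc = 1.0                             # start fully charged
for i_k in current:
    soc += i_k * dt / (capacity_ah * 3600.0)
soc = min(max(soc, 0.0), 1.0)
print(f"SOC after discharge: {soc:.2f}")   # expect about 0.50
```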

  8. State-space self-tuner for on-line adaptive control

    NASA Technical Reports Server (NTRS)

    Shieh, L. S.

    1994-01-01

    Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments, constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approaches, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, the advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported in. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost and high performance microprocessors. Recently, we have developed a new hybrid state-space self tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.

  9. Improving Estimates and Forecasts of Lake Carbon Pools and Fluxes Using Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zwart, J. A.; Hararuk, O.; Prairie, Y.; Solomon, C.; Jones, S.

    2017-12-01

    Lakes are biogeochemical hotspots on the landscape, contributing significantly to the global carbon cycle despite their small areal coverage. Observations and models of lake carbon pools and fluxes are rarely explicitly combined through data assimilation despite significant use of this technique in other fields with great success. Data assimilation adds value to both observations and models by constraining models with observations of the system and by leveraging knowledge of the system formalized by the model to objectively fill information gaps. In this analysis, we highlight the utility of data assimilation in lake carbon cycling research by using the Ensemble Kalman Filter to combine simple lake carbon models with observations of lake carbon pools. We demonstrate the use of data assimilation to improve a model's representation of lake carbon dynamics, to reduce uncertainty in estimates of lake carbon pools and fluxes, and to improve the accuracy of carbon pool size estimates relative to estimates derived from observations alone. Data assimilation techniques should be embraced as valuable tools for lake biogeochemists interested in learning about ecosystem dynamics and forecasting ecosystem states and processes.
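
    A minimal sketch of a single Ensemble Kalman Filter analysis step for a scalar carbon pool is given below; the forecast ensemble, observation, and error statistics are invented for illustration, and the model forecast step is omitted.

```python
import numpy as np

# Hedged sketch of one Ensemble Kalman Filter analysis step for a scalar lake
# carbon pool (e.g., dissolved organic carbon). Ensemble and observation are made up.
rng = np.random.default_rng(1)
ensemble = rng.normal(500.0, 40.0, size=50)   # forecast ensemble of the carbon pool (kg C)
obs, obs_sd = 460.0, 20.0                     # observation and its error standard deviation

# Kalman gain from ensemble variance and observation-error variance (identity observation operator)
P = ensemble.var(ddof=1)
K = P / (P + obs_sd**2)

# perturbed-observation update: each member assimilates a noisy copy of the observation
perturbed = obs + rng.normal(0.0, obs_sd, size=ensemble.size)
analysis = ensemble + K * (perturbed - ensemble)
print(ensemble.mean(), analysis.mean(), analysis.var(ddof=1))
```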

  10. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    NASA Technical Reports Server (NTRS)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  11. Economic implications of current systems

    NASA Technical Reports Server (NTRS)

    Daniel, R. E.; Aster, R. W.

    1983-01-01

    The primary goals of this study are to estimate the value of R&D to photovoltaic (PV) metallization systems cost, and to provide a method for selecting an optimal metallization method for any given PV system. The value-added cost and relative electrical performance of 25 state-of-the-art (SOA) and advanced metallization system techniques are compared.

  12. Validation of Passive Sampling Devices for Monitoring of Munitions Constituents in Underwater Environments

    DTIC Science & Technology

    2017-06-30

  13. Model transformations for state-space self-tuning control of multivariable stochastic systems

    NASA Technical Reports Server (NTRS)

    Shieh, Leang S.; Bao, Yuan L.; Coleman, Norman P.

    1988-01-01

    The design of self-tuning controllers for multivariable stochastic systems is considered analytically. A long-division technique for finding the similarity transformation matrix and transforming the estimated left MFD to the right MFD is developed; the derivation is given in detail, and the procedures involved are briefly characterized.

  14. A systematic review of lumped-parameter equivalent circuit models for real-time estimation of lithium-ion battery states

    NASA Astrophysics Data System (ADS)

    Nejad, S.; Gladwin, D. T.; Stone, D. A.

    2016-06-01

    This paper presents a systematic review of the most commonly used lumped-parameter equivalent circuit model structures in lithium-ion battery energy storage applications. These models include the Combined model, Rint model, two hysteresis models, Randles' model, a modified Randles' model and two resistor-capacitor (RC) network models with and without hysteresis included. Two variations of the lithium-ion cell chemistry, namely the lithium-ion iron phosphate (LiFePO4) and lithium nickel-manganese-cobalt oxide (LiNMC), are used for testing purposes. The model parameters and states are recursively estimated using a nonlinear system identification technique based on the dual Extended Kalman Filter (dual-EKF) algorithm. The dynamic performance of the model structures is verified using the results obtained from a self-designed pulsed-current test and an electric vehicle (EV) drive cycle based on the New European Drive Cycle (NEDC) profile over a range of operating temperatures. Analysis of the ten model structures is conducted with respect to state-of-charge (SOC) and state-of-power (SOP) estimation with erroneous initial conditions. Comparatively, both RC model structures provide the best dynamic performance, with an outstanding SOC estimation accuracy. For those cell chemistries with large inherent hysteresis levels (e.g. LiFePO4), the RC model with only one time constant is combined with a dynamic hysteresis model to further enhance the performance of the SOC estimator.
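
    As background for the model structures being compared, the sketch below simulates the simplest of them, a first-order RC (Thevenin) equivalent circuit, with placeholder parameters; it does not implement the dual-EKF identification used in the paper.

```python
import numpy as np

# Minimal sketch of a first-order RC (Thevenin) equivalent circuit model, the simplest
# of the lumped-parameter structures reviewed. Parameters are illustrative, not fitted.
R0, R1, C1 = 0.015, 0.010, 2000.0      # ohmic resistance, RC-branch resistance/capacitance
capacity_ah, dt = 2.3, 1.0

def ocv(soc):
    # crude placeholder open-circuit-voltage curve
    return 3.2 + 1.0 * soc

soc, v_rc = 0.9, 0.0
for _ in range(600):                    # 10 minutes of 2 A discharge
    i = 2.0
    soc -= i * dt / (capacity_ah * 3600.0)
    v_rc += dt * (i / C1 - v_rc / (R1 * C1))      # RC-branch dynamics (forward Euler)
    v_term = ocv(soc) - i * R0 - v_rc             # terminal voltage
print(f"SOC = {soc:.3f}, terminal voltage = {v_term:.3f} V")
```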

  15. A closed-form solution to tensor voting: theory and applications.

    PubMed

    Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard

    2012-08-01

    We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed as MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation on its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.

  16. Extended state observer based robust adaptive control on SE(3) for coupled spacecraft tracking maneuver with actuator saturation and misalignment

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqiao; Ye, Dong; Sun, Zhaowei; Liu, Chuang

    2018-02-01

    This paper presents a robust adaptive controller integrated with an extended state observer (ESO) to solve coupled spacecraft tracking maneuver in the presence of model uncertainties, external disturbances, actuator uncertainties including magnitude deviation and misalignment, and even actuator saturation. More specifically, employing the exponential coordinates on the Lie group SE(3) to describe configuration tracking errors, the coupled six-degrees-of-freedom (6-DOF) dynamics are developed for spacecraft relative motion, in which a generic fully actuated thruster distribution is considered and the lumped disturbances are reconstructed by using anti-windup technique. Then, a novel ESO, developed via second order sliding mode (SOSM) technique and adding linear correction terms to improve the performance, is designed firstly to estimate the disturbances in finite time. Based on the estimated information, an adaptive fast terminal sliding mode (AFTSM) controller is developed to guarantee the almost global asymptotic stability of the resulting closed-loop system such that the trajectory can be tracked with all the aforementioned drawbacks addressed simultaneously. Finally, the effectiveness of the controller is illustrated through numerical examples.

  17. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.

  18. Experimental correlations for transient soot measurement in diesel exhaust aerosol with light extinction, electrical mobility and diffusion charger sensor techniques

    NASA Astrophysics Data System (ADS)

    Bermúdez, Vicente; Pastor, José V.; López, J. Javier; Campos, Daniel

    2014-06-01

    A study of soot measurement deviation using a diffusion charger sensor with three dilution ratios was conducted to obtain an optimum setting that yields accurate measurements of the soot mass emitted by a light-duty diesel engine under transient operating conditions. The paper includes three experimental phases: an experimental validation of the measurement settings in steady-state operating conditions; evaluation of the proposed setting under the New European Driving Cycle; and a study of correlations for different measurement techniques. These correlations provide a reliable tool for estimating soot emission from light extinction measurement or from accumulation particle mode concentration. There are several methods and correlations to estimate soot concentration in the literature but most of them were assessed for steady-state operating points. In this case, the correlations are obtained from more than 4000 points measured in transient conditions. The results of the two new correlations, with less than 4% deviation from the reference measurement, are presented in this paper.

  19. State estimator for multisensor systems with irregular sampling and time-varying delays

    NASA Astrophysics Data System (ADS)

    Peñarrocha, I.; Sanchis, R.; Romero, J. A.

    2012-08-01

    This article addresses the state estimation in linear time-varying systems with several sensors with different availability, randomly sampled in time and whose measurements have a time-varying delay. The approach is based on a modification of the Kalman filter with the negative-time measurement update strategy, avoiding running back the full standard Kalman filter, the use of full augmented order models or the use of reorganisation techniques, leading to a lower implementation cost algorithm. The update equations are run every time a new measurement is available, independently of the time when it was taken. The approach is useful for networked control systems, systems with long delays and scarce measurements and for out-of-sequence measurements.

  20. Adaptive disturbance compensation finite control set optimal control for PMSM systems based on sliding mode extended state observer

    NASA Astrophysics Data System (ADS)

    Wu, Yun-jie; Li, Guo-fei

    2018-01-01

    Based on sliding mode extended state observer (SMESO) technique, an adaptive disturbance compensation finite control set optimal control (FCS-OC) strategy is proposed for permanent magnet synchronous motor (PMSM) system driven by voltage source inverter (VSI). So as to improve robustness of finite control set optimal control strategy, a SMESO is proposed to estimate the output-effect disturbance. The estimated value is fed back to finite control set optimal controller for implementing disturbance compensation. It is indicated through theoretical analysis that the designed SMESO could converge in finite time. The simulation results illustrate that the proposed adaptive disturbance compensation FCS-OC possesses better dynamical response behavior in the presence of disturbance.
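
    For orientation, the sketch below implements a plain linear extended state observer for a first-order plant with a lumped disturbance; it is not the sliding-mode ESO designed in the paper, and all gains and plant parameters are placeholders.

```python
import numpy as np

# Hedged sketch of a linear extended state observer (ESO) for a first-order plant
# dx/dt = a*x + b*u + d, where the lumped disturbance d is treated as an extra state.
a, b = -2.0, 1.0
dt, steps = 1e-3, 5000
beta1, beta2 = 200.0, 8000.0           # observer gains (placeholders)

x = 0.0
x_hat, d_hat = 0.0, 0.0
for k in range(steps):
    t = k * dt
    u = 1.0
    d = 0.5 + 0.2 * np.sin(2 * np.pi * 0.5 * t)   # unknown lumped disturbance
    e = x - x_hat                                  # output estimation error
    # observer: model copy driven by the output error, plus a disturbance integrator
    x_hat += dt * (a * x_hat + b * u + d_hat + beta1 * e)
    d_hat += dt * (beta2 * e)
    # true plant
    x += dt * (a * x + b * u + d)
print(f"true d = {d:.3f}, estimated d = {d_hat:.3f}")
```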

  1. Electroencephalography signatures of attention-deficit/hyperactivity disorder: clinical utility.

    PubMed

    Alba, Guzmán; Pereda, Ernesto; Mañas, Soledad; Méndez, Leopoldo D; González, Almudena; González, Julián J

    2015-01-01

    The techniques and the most important results on the use of electroencephalography (EEG) to extract different measures are reviewed in this work, which can be clinically useful to study subjects with attention-deficit/hyperactivity disorder (ADHD). First, we discuss briefly and in simple terms the EEG analysis and processing techniques most used in the context of ADHD. We review techniques that both analyze individual EEG channels (univariate measures) and study the statistical interdependence between different EEG channels (multivariate measures), the so-called functional brain connectivity. Among the former ones, we review the classical indices of absolute and relative spectral power and estimations of the complexity of the channels, such as the approximate entropy and the Lempel-Ziv complexity. Among the latter ones, we focus on the magnitude square coherence and on different measures based on the concept of generalized synchronization and its estimation in the state space. Second, from a historical point of view, we present the most important results achieved with these techniques and their clinical utility (sensitivity, specificity, and accuracy) to diagnose ADHD. Finally, we propose future research lines based on these results.
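
    As an example of one of the univariate measures mentioned, the sketch below computes a simple dictionary-based Lempel-Ziv complexity of a median-binarized signal; the signal here is random noise standing in for an EEG channel, and normalization conventions vary across studies.

```python
import numpy as np

def lz_complexity(bits):
    """Simple dictionary-based Lempel-Ziv complexity of a binary string."""
    phrases, current, count = set(), "", 0
    for b in bits:
        current += b
        if current not in phrases:     # start a new phrase whenever the word is unseen
            phrases.add(current)
            count += 1
            current = ""
    if current:                        # count an unfinished trailing phrase once
        count += 1
    return count

rng = np.random.default_rng(2)
signal = rng.normal(size=1024)                       # stand-in for one EEG channel
bits = "".join("1" if x > np.median(signal) else "0" for x in signal)
c = lz_complexity(bits)
n = len(bits)
c_norm = c * np.log2(n) / n                          # common length normalization
print(c, round(c_norm, 3))
```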

  2. A comparison of Hong Kong and United Kingdom SF-6D health states valuations using a nonparametric Bayesian method.

    PubMed

    Kharroubi, Samer A; Brazier, John E; McGhee, Sarah

    2014-06-01

    There is interest in the extent to which valuations of health may differ between different countries and cultures, but few studies have compared preference values of health states obtained in different countries. The present study applies a nonparametric model to estimate and compare two HK and UK standard gamble values for six-dimensional health state short form (derived from short-form 36 health survey) (SF-6D) health states using Bayesian methods. The data set is the HK and UK SF-6D valuation studies in which two samples of 197 and 249 states defined by the SF-6D were valued by representative samples of the HK and UK general populations, respectively, both using the standard gamble technique. We estimated a function applicable across both countries that explicitly accounts for the differences between them, and is estimated using the data from both countries. The results suggest that differences in SF-6D health state valuations between the UK and HK general populations are potentially important. In particular, the valuations of Hong Kong were meaningfully higher than those of the United Kingdom for most of the selected SF-6D health states. The magnitude of these country-specific differences in health state valuation depended, however, in a complex way on the levels of individual dimensions. The new Bayesian nonparametric method is a powerful approach for analyzing data from multiple nationalities or ethnic groups to understand the differences between them and potentially to estimate the underlying utility functions more efficiently. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  3. Pre- and postprocessing techniques for determining goodness of computational meshes

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Westermann, T.; Bass, J. M.

    1993-01-01

    Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.

  4. NEXRAD quantitative precipitation estimates, data acquisition, and processing for the DuPage County, Illinois, streamflow-simulation modeling system

    USGS Publications Warehouse

    Ortel, Terry W.; Spies, Ryan R.

    2015-11-19

    Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).

  5. Ring Current Pressure Estimation with RAM-SCB using Data Assimilation and Van Allen Probe Flux Data

    NASA Astrophysics Data System (ADS)

    Godinez, H. C.; Yu, Y.; Henderson, M. G.; Larsen, B.; Jordanova, V.

    2015-12-01

    Capturing and subsequently modeling the influence of tail plasma injections on the inner magnetosphere is particularly important for understanding the formation and evolution of Earth's ring current. In this study, the ring current distribution is estimated with the Ring Current-Atmosphere Interactions Model with Self-Consistent Magnetic field (RAM-SCB) using, for the first time, data assimilation techniques and particle flux data from the Van Allen Probes. The state of the ring current within the RAM-SCB is corrected via an ensemble based data assimilation technique by using proton flux from one of the Van Allen Probes, to capture the enhancement of ring current following an isolated substorm event on July 18 2013. The results show significant improvement in the estimation of the ring current particle distributions in the RAM-SCB model, leading to better agreement with observations. This newly implemented data assimilation technique in the global modeling of the ring current thus provides a promising tool to better characterize the effect of substorm injections in the near-Earth regions. The work is part of the Space Hazards Induced near Earth by Large, Dynamic Storms (SHIELDS) project in Los Alamos National Laboratory.

  6. Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions

    PubMed Central

    Li, Haoran; Xiong, Li; Jiang, Xiaoqian

    2014-01-01

    Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
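
    One of the two parameter-estimation routes mentioned, Kendall's tau, has a closed-form link to the Gaussian copula correlation (rho = sin(pi*tau/2)). The sketch below shows that conversion on synthetic data, without any differential-privacy noise added.

```python
import numpy as np
from scipy.stats import kendalltau

# Hedged sketch of the Kendall's-tau route to a Gaussian copula parameter (one of the
# two estimation strategies mentioned); the bivariate data below are synthetic, and no
# differential-privacy perturbation is applied here.
rng = np.random.default_rng(3)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=2000)
x, y = np.exp(z[:, 0]), z[:, 1] ** 3          # arbitrary marginals, same dependence

tau, _ = kendalltau(x, y)
rho = np.sin(np.pi * tau / 2.0)               # Gaussian copula correlation implied by tau
print(f"tau = {tau:.3f}, implied copula rho = {rho:.3f}")   # should be near 0.7
```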

  7. Estimation and validation of the stability and control derivatives of a nonlinear dynamic model of a fixed-wing UAV

    NASA Astrophysics Data System (ADS)

    Courchesne, Samuel

    Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic elements of a flight-mechanics model are the mass and inertia properties and the major aerodynamic terms; obtaining them involves a complex process combining various numerical analysis techniques and experimental procedures. This thesis focuses on estimation techniques applied to the problem of identifying stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate processing tasks from several disciplines, such as parameter estimation modeling, instrumentation, the definition of flight maneuvers and validation. The system under study is a nonlinear six-degree-of-freedom model with a linear aerodynamic model. Time-domain techniques are used for identification of the drone. The first technique, the equation error method, is used to determine the structure of the aerodynamic model. Thereafter, the output error method and the filter error method are used to estimate the values of the aerodynamic coefficients. Matlab parameter estimation scripts obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A substantial part of the research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions with low atmospheric disturbance. The identification results showed that the filter error method is the most effective for estimating the parameters of the drone because of the presence of process and measurement noise. The aerodynamic coefficients are validated using a numerical analysis based on the vortex method. In addition, a simulation model incorporating the estimated parameters is used for comparison against the measured states. Finally, good agreement between the results is demonstrated despite the limited amount of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.

  8. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications that try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will continue to contain this error. In such techniques, it is highly likely to observe artificial characteristics in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements of the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR14/001 projects.
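
    A minimal sketch of the KDE step and the summary statistics mentioned above is given below, using scipy's Gaussian KDE on a synthetic, skewed sample standing in for TEC values.

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis

# Minimal sketch of the KDE step described above; the "TEC" sample here is synthetic
# (TECU values drawn from a two-component mixture), not GNSS-derived.
rng = np.random.default_rng(4)
tec = np.concatenate([rng.normal(15, 3, 800), rng.normal(30, 6, 200)])

kde = gaussian_kde(tec)                     # non-parametric pdf estimate
grid = np.linspace(tec.min(), tec.max(), 200)
pdf = kde(grid)

print("mean:", tec.mean(), "variance:", tec.var(ddof=1), "kurtosis:", kurtosis(tec))
dx = grid[1] - grid[0]
print("pdf integrates to ~1:", (pdf * dx).sum())
```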

  9. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    PubMed

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades-off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.

  10. Investigations of the drift mobility of carriers and density of states in nanocrystalline CdS thin films

    NASA Astrophysics Data System (ADS)

    Singh, Baljinder; Singh, Janpreet; Kaur, Jagdish; Moudgil, R. K.; Tripathi, S. K.

    2016-06-01

    Nanocrystalline Cadmium Sulfide (nc-CdS) thin films have been prepared on well-cleaned glass substrate at room temperature (300 K) by thermal evaporation technique using inert gas condensation (IGC) method. X-ray diffraction (XRD) analysis reveals that the films crystallize in hexagonal structure with preferred orientation along [002] direction. Scanning electron microscope (SEM) and Transmission electron microscope (TEM) studies reveal that grains are spherical in shape and uniformly distributed over the glass substrates. The optical band gap of the film is estimated from the transmittance spectra. Electrical parameters such as Hall coefficient, carrier type, carrier concentration, resistivity and mobility are determined using Hall measurements at 300 K. Transit time and mobility are estimated from Time of Flight (TOF) transient photocurrent technique in gap cell configuration. The measured values of electron drift mobility from TOF and Hall measurements are of the same order. Constant Photocurrent Method in ac-mode (ac-CPM) is used to measure the absorption spectra in low absorption region. By applying derivative method, we have converted the measured absorption data into a density of states (DOS) distribution in the lower part of the energy gap. The value of Urbach energy, steepness parameter and density of defect states have been calculated from the absorption and DOS spectra.

  11. Estimating zero strain states of very soft tissue under gravity loading using digital image correlation

    PubMed Central

    Gao, Zhan; Desai, Jaydev P.

    2009-01-01

    This paper presents several experimental techniques and concepts in the process of measuring mechanical properties of very soft tissue in an ex vivo tensile test. Gravitational body force on very soft tissue causes pre-compression and results in a non-uniform initial deformation. The global Digital Image Correlation technique is used to measure the full field deformation behavior of liver tissue in uniaxial tension testing. A maximum stretching band is observed in the incremental strain field when a region of tissue passes from compression and enters a state of tension. A new method for estimating the zero strain state is proposed: the zero strain position is close to, but ahead of the position of the maximum stretching band, or in other words, the tangent of a nominal stress-stretch curve reaches minimum at λ ≳ 1. The approach, to identify zero strain by using maximum incremental strain, can be implemented in other types of image-based soft tissue analysis. The experimental results of ten samples from seven porcine livers are presented and material parameters for the Ogden model fit are obtained. The finite element simulation based on the fitted model confirms the effect of gravity on the deformation of very soft tissue and validates our approach. PMID:20015676

  12. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
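
    As a generic illustration of the underdetermined estimation problem (not the tuner-selection routine itself), the sketch below computes a maximum a posteriori estimate of more parameters than measurements using a zero-mean Gaussian prior; all matrices are random placeholders rather than an engine model.

```python
import numpy as np

# Generic illustration of a maximum a posteriori (MAP) estimate for an underdetermined
# linear problem (more unknown "health parameters" than sensors); matrices are random
# placeholders, not an engine model.
rng = np.random.default_rng(5)
n_params, n_sensors = 10, 6
H = rng.normal(size=(n_sensors, n_params))     # sensitivity of measurements to parameters
P = np.eye(n_params)                           # prior covariance of the parameters
R = 0.01 * np.eye(n_sensors)                   # measurement-noise covariance

x_true = rng.normal(size=n_params)
y = H @ x_true + rng.multivariate_normal(np.zeros(n_sensors), R)

# MAP estimate with a zero-mean prior: x_hat = P H^T (H P H^T + R)^(-1) y
x_hat = P @ H.T @ np.linalg.solve(H @ P @ H.T + R, y)
print(np.round(x_hat - x_true, 2))             # residual estimation error
```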

  13. Method for Estimating Water Withdrawals for Livestock in the United States, 2005

    USGS Publications Warehouse

    Lovelace, John K.

    2009-01-01

    Livestock water use includes ground water and surface water associated with livestock watering, feedlots, dairy operations, and other on-farm needs. The water may be used for drinking, cooling, sanitation, waste disposal, and other needs related to the animals. Estimates of water withdrawals for livestock are needed for water planning and management. This report documents a method used to estimate withdrawals of fresh ground water and surface water for livestock in 2005 for each county and county equivalent in the United States, Puerto Rico, and the U.S. Virgin Islands. Categories of livestock included dairy cattle, beef and other cattle, hogs and pigs, laying hens, broilers and other chickens, turkeys, sheep and lambs, all goats, and horses (including ponies, mules, burros, and donkeys). Use of the method described in this report could result in more consistent water-withdrawal estimates for livestock that can be used by water managers and planners to determine water needs and trends across the United States. Water withdrawals for livestock in 2005 were estimated by using water-use coefficients, in gallons per head per day for each animal type, and livestock-population data. Coefficients for various livestock for most States were obtained from U.S. Geological Survey water-use program personnel or U.S. Geological Survey water-use publications. When no coefficient was available for an animal type in a State, the median value of reported coefficients for that animal was used. Livestock-population data were provided by the National Agricultural Statistics Service. County estimates were further divided into ground-water and surface-water withdrawals for each county and county equivalent. County totals from 2005 were compared to county totals from 1995 and 2000. Large deviations from 1995 or 2000 livestock withdrawal estimates were investigated and generally were due to comparison with reported withdrawals, differences in estimation techniques, differences in livestock coefficients, or use of livestock-population data from different sources. The results of this study were distributed to U.S. Geological Survey water-use program personnel in each State during 2007. Water-use program personnel are required to submit estimated withdrawals for all categories of use in their States to the National Water-Use Information Program for inclusion in a national report describing water use in the United States during 2005. Water-use program personnel had the option of submitting these estimates, a modified version of these estimates, or their own set of estimates or reported data. Estimated withdrawals resulting from the method described in this report are not presented herein to avoid potential inconsistencies with estimated withdrawals for livestock that will be presented in the national report, as different methods used by water-use personnel may result in different withdrawal estimates. Estimated withdrawals also are not presented to avoid potential disclosure of data for individual livestock operations.
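
    The core calculation described above is a coefficient-times-population product per animal type; the sketch below uses invented coefficients and head counts for a single hypothetical county.

```python
# Minimal sketch of the coefficient-times-population calculation described above.
# Coefficients (gallons per head per day) and head counts are illustrative only.
coefficients_gpd = {"dairy_cattle": 35.0, "hogs_pigs": 4.0, "broilers": 0.06}
county_population = {"dairy_cattle": 12_000, "hogs_pigs": 40_000, "broilers": 900_000}

withdrawals_mgd = {
    animal: coefficients_gpd[animal] * county_population[animal] / 1e6   # million gallons per day
    for animal in coefficients_gpd
}
total = sum(withdrawals_mgd.values())
print(withdrawals_mgd, f"total = {total:.3f} Mgal/d")
```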

  14. Guidelines for preparation of State water-use estimates for 2005

    USGS Publications Warehouse

    Hutson, Susan S.

    2007-01-01

    The U.S. Geological Survey (USGS) has estimated the use of water in the United States at 5-year intervals since 1950. This report describes the water-use categories and data elements required for the 2005 national water-use compilation conducted as part of the USGS National Water Use Information Program. The report identifies sources of water-use information, provides standard methods and techniques for estimating water use at the county level, and outlines steps for preparing documentation for the United States, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands. As part of this USGS program to document water use on a national scale for the year 2005, estimates of water withdrawals for the categories of public supply, self-supplied domestic, industrial, irrigation, and thermoelectric power at the county level are prepared for each State using the guidelines in this report. Estimates of water withdrawals for aquaculture, livestock, and mining are prepared for each State using a county-based national model, although study chiefs in each State have the option of producing independent county estimates of water withdrawals for these categories. Estimates of deliveries of water from public supplies for domestic use by county also will be prepared for each State for 2005. As a result, domestic water use can be determined for each State by combining self-supplied domestic withdrawals and publicly supplied domestic deliveries. Fresh ground-water and surfacewater estimates will be prepared for all categories of use; and saline ground-water and surface-water estimates by county will be prepared for the categories of public supply, industrial, and thermoelectric power. Power production for thermoelectric power will be compiled for 2005. If data are available, reclaimed wastewater use will be compiled for the industrial and irrigation categories. Optional water-use categories are commercial, hydroelectric power, and wastewater treatment. Optional data elements are public-supply deliveries to commercial, industrial, and thermoelectric-power users; consumptive use; irrigation conveyance loss; and number of facilities. Aggregation of water-use data by eight-digit hydrologic cataloging unit and by principal aquifer also is optional. Water-use data compiled by the States will be stored in the USGS Aggregate Water-Use Data System (AWUDS). This database is a comprehensive aggregated database designed to store both mandatory and optional data elements. AWUDS contains several routines that can be used for quality assurance and quality control of the data, and produces tables of wateruse data compiled for 1985, 1990, 1995, and 2000.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, Yu; Scheeres, D. J.; Busch, Michael W.

    The 4.5 km long near-Earth asteroid 4179 Toutatis has made close Earth flybys approximately every four years between 1992 and 2012, and has been observed with high-resolution radar imaging during each approach. Its most recent Earth flyby in 2012 December was observed extensively at the Goldstone and Very Large Array radar telescopes. In this paper, Toutatis' spin state dynamics are estimated from observations of five flybys between 1992 and 2008. Observations were used to fit Toutatis' spin state dynamics in a least-squares sense, with the solar and terrestrial tidal torques incorporated in the dynamical model. The estimated parameters are Toutatis' Euler angles, angular velocity, moments of inertia, and the center-of-mass-center-of-figure offset. The spin state dynamics as well as the uncertainties of the Euler angles and angular velocity of the converged solution are then propagated to 2012 December in order to compare the dynamical model to the most recent Toutatis observations. The same technique of rotational dynamics estimation can be applied to any other tumbling body, given sufficiently accurate observations.

  16. Improving the clinical assessment of consciousness with advances in electrophysiological and neuroimaging techniques

    PubMed Central

    2010-01-01

    In clinical neurology, a comprehensive understanding of consciousness has been regarded as an abstract concept - best left to philosophers. However, times are changing and the need to clinically assess consciousness is increasingly becoming a real-world, practical challenge. Current methods for evaluating altered levels of consciousness are highly reliant on either behavioural measures or anatomical imaging. While these methods have some utility, estimates of misdiagnosis are worrisome (as high as 43%) - clearly this is a major clinical problem. The solution must involve objective, physiologically based measures that do not rely on behaviour. This paper reviews recent advances in physiologically based measures that enable better evaluation of consciousness states (coma, vegetative state, minimally conscious state, and locked in syndrome). Based on the evidence to-date, electroencephalographic and neuroimaging based assessments of consciousness provide valuable information for evaluation of residual function, formation of differential diagnoses, and estimation of prognosis. PMID:20113490

  17. Vision-based control for flight relative to dynamic environments

    NASA Astrophysics Data System (ADS)

    Causey, Ryan Scott

    The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capabilities through more advanced systems that are useful for missions that require autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. A state estimation approach for moving targets is derived using homography. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform for which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates the open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.

  18. Steady State Fluorescence Spectroscopy for Medical Diagnosis

    NASA Astrophysics Data System (ADS)

    Mahadevan-Jansen, Anita; Gebhart, Steven C.

    Light can react with tissue in different ways and provide information for identifying the physiological state of tissue or detecting the presence of disease. The light used to probe tissue does so in a non-intrusive manner and typically uses very low levels of light far below the requirements for therapeutic applications. The use of fiber optics simplifies the delivery and collection of this light in a minimally invasive manner. Since tissue response is virtually instantaneous, the results are obtained in real-time and the use of data processing techniques and multi-variate statistical analysis allows for automated detection and therefore provides an objective estimation of the tissue state. These then form the fundamental basis for the application of optical techniques for the detection of tissue physiology as well as pathology. These distinct advantages have encouraged many researchers to pursue the development of the different optical interactions for biological and medical detection.

  19. Optimal parameter estimation with a fixed rate of abstention

    NASA Astrophysics Data System (ADS)

    Gendra, B.; Ronco-Bonvehi, E.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.

    2013-07-01

    The problems of optimally estimating a phase, a direction, and the orientation of a Cartesian frame (or trihedron) with general pure states are addressed. Special emphasis is put on estimation schemes that allow for inconclusive answers or abstention. It is shown that such schemes enable drastic improvements, up to the extent of attaining the Heisenberg limit in some cases, and the required amount of abstention is quantified. A general mathematical framework to deal with the asymptotic limit of many qubits or large angular momentum is introduced and used to obtain analytical results for all the relevant cases under consideration. Parameter estimation with abstention is also formulated as a semidefinite programming problem, for which very efficient numerical optimization techniques exist.

  20. Investigating Downscaling Methods and Evaluating Climate Models for Use in Estimating Regional Water Resources in Mountainous Regions under Changing Climatic Conditions

    NASA Technical Reports Server (NTRS)

    Frei, Allan; Nolin, Anne W.; Serreze, Mark C.; Armstrong, Richard L.; McGinnis, David L.; Robinson, David A.

    2004-01-01

    The purpose of this three-year study is to develop and evaluate techniques to estimate the range of potential hydrological impacts of climate change in mountainous areas. Three main objectives are set out in the proposal. (1) To develop and evaluate transfer functions to link tropospheric circulation to regional snowfall. (2) To evaluate a suite of General Circulation Models (GCMs) for use in estimating synoptic scale circulation and the resultant regional snowfall. And (3) to estimate the range of potential hydrological impacts of changing climate in the two case study areas: the Upper Colorado River basin, and the Catskill Mountains of southeastern New York State. Both regions provide water to large populations.

  1. Regional Distribution of Forest Height and Biomass from Multisensor Data Fusion

    NASA Technical Reports Server (NTRS)

    Yu, Yifan; Saatchi, Sassan; Heath, Linda S.; LaPoint, Elizabeth; Myneni, Ranga; Knyazikhin, Yuri

    2010-01-01

    Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM derived elevation (30 m), Landsat Enhanced Thematic Mapper (ETM) bands (30 m), derived vegetation index (VI) and NLCD2001 land cover map. The first fusion algorithm corrects for missing or erroneous NED data using an iterative interpolation approach and produces distribution of scattering phase centers from SRTM-NED in three dominant forest types of evergreen conifers, deciduous, and mixed stands. The second fusion technique integrates the USDA Forest Service, Forest Inventory and Analysis (FIA) ground-based plot data to develop an algorithm to transform the scattering phase centers into mean forest height and aboveground biomass. Height estimates over evergreen (R2 = 0.86, P < 0.001; RMSE = 1.1 m) and mixed forests (R2 = 0.93, P < 0.001, RMSE = 0.8 m) produced the best results. Estimates over deciduous forests were less accurate because of the winter acquisition of SRTM data and loss of scattering phase center from tree-surface interaction. We used two methods to estimate AGLB; algorithms based on direct estimation from the scattering phase center produced higher precision (R2 = 0.79, RMSE = 25 Mg/ha) than those estimated from forest height (R2 = 0.25, RMSE = 66 Mg/ha). We discuss sources of uncertainty and implications of the results in the context of mapping regional and continental scale forest biomass distribution.

  2. Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout.

    PubMed

    Das, Anup; Pradhapan, Paruthi; Groenendaal, Willemijn; Adiraju, Prathyusha; Rajan, Raj Thilak; Catthoor, Francky; Schaafsma, Siebren; Krichmar, Jeffrey L; Dutt, Nikil; Van Hoof, Chris

    2018-03-01

    Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart-rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy yet a significantly lower energy footprint, leading to an extended battery life of wearable devices. We validated our approach with CARLsim, a GPU accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered from in-house clinical trials and public ECG databases. Results show high accuracy and low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices. Copyright © 2018 Elsevier Ltd. All rights reserved.
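    A minimal sketch of the Fuzzy c-Means clustering step that such an unsupervised readout relies on is given below, assuming synthetic two-class "spike count" features; the feature construction, parameter values, and function names are illustrative and do not reproduce the authors' Liquid State Machine pipeline.

      # A from-scratch Fuzzy c-Means sketch, standing in for the unsupervised readout
      # described in the abstract. The synthetic "liquid state" features and all
      # parameter names below are illustrative assumptions, not the authors' code.
      import numpy as np

      def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
          """Cluster rows of X; returns (centers, membership u of shape (n_clusters, n_samples))."""
          rng = np.random.default_rng(seed)
          n_samples = X.shape[0]
          u = rng.random((n_clusters, n_samples))
          u /= u.sum(axis=0, keepdims=True)               # memberships sum to 1 per sample
          for _ in range(n_iter):
              um = u ** m
              centers = (um @ X) / um.sum(axis=1, keepdims=True)
              d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
              u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
          return centers, u

      # Synthetic two-class "spike count" features (e.g., low vs. high heart-rate epochs).
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(5.0, 1.0, size=(50, 3)),
                     rng.normal(12.0, 1.0, size=(50, 3))])
      centers, u = fuzzy_c_means(X, n_clusters=2)
      labels = u.argmax(axis=0)
      print("cluster centers:\n", centers)
      print("first/last 5 labels:", labels[:5], labels[-5:])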

  3. Estimated use of water in the New England States, 1990

    USGS Publications Warehouse

    Korzendorfer, B.A.; Horn, M.A.

    1995-01-01

    Data on freshwater withdrawals in 1990 were compiled for the New England States. An estimated 4,160 Mgal/d (million gallons per day) of freshwater was withdrawn in 1990 in the six States. Of this total, 1,430 Mgal/d was withdrawn by public suppliers and delivered to users, and 2,720 Mgal/d was withdrawn by domestic, commercial, industrial, agricultural, mining, and thermoelectric power-generation users. More than 83 percent of the freshwater was from surface-water sources. Massachusetts, with the largest population, had the largest withdrawals of water. Data on saline-water withdrawals and instream flow at hydroelectric plants were also compiled. An estimated 9,170 Mgal/d of saline water was used for thermoelectric-power generation and industrial use in Connecticut, Maine, Massachusetts, New Hampshire, and Rhode Island. Return flow from public wastewater-treatment plants totaled 1,750 Mgal/d; more than half (55 percent) of this return flow was in Massachusetts. In addition, about 178,000 Mgal/d was used for instream hydroelectric power generation; the largest users were Maine (about 83,000 Mgal/d) and New Hampshire (46,000 Mgal/d). These data, some of which were based on site-specific water-use information and some based on estimation techniques, were compiled through joint efforts by the U.S. Geological Survey and State cooperators for the 1990 national water-use compilation.

  4. Guidelines for preparation of State water-use estimates for 2015

    USGS Publications Warehouse

    Bradley, Michael W.

    2017-05-01

    The U.S. Geological Survey (USGS) has estimated the use of water in the United States at 5-year intervals since 1950. This report describes the water-use categories and data elements used for the national water-use compilation conducted as part of the USGS National Water-Use Science Project. The report identifies sources of water-use information, provides standard methods and techniques for estimating water use at the county level, and outlines steps for preparing documentation for the United States, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands. As part of this USGS program to document water use on a national scale, estimates of water withdrawals for the categories of public supply, self-supplied domestic, industrial, irrigation, and thermoelectric power are prepared for each county in each State, District, or territory by using the guidelines in this report. County estimates of water withdrawals for aquaculture, livestock, and mining are prepared for each State by using a county-based national model, although water-use programs in each State or Water Science Center have the option of producing independent county estimates of water withdrawals for these categories. Estimates of water withdrawals and consumptive use for thermoelectric power will be aggregated to the county level for each State by the national project; additionally, irrigation consumptive use at the county level will also be provided, although study chiefs in each State have the option of producing independent county estimates of water withdrawals and consumptive use for these categories. Estimates of deliveries of water from public supplies for domestic use by county also will be prepared for each State. As a result, total domestic water use can be determined for each State by combining self-supplied domestic withdrawals and public-supplied domestic deliveries. Fresh groundwater and surface-water estimates will be prepared for all categories of use, and saline groundwater and surface-water estimates by county will be prepared for the categories of public supply, industrial, mining, and thermoelectric power. Power production for thermoelectric power and irrigated acres by irrigation system type will be compiled. If data are available, reclaimed-wastewater use will be compiled for the public-supply, industrial, mining, thermoelectric-power, and irrigation categories. Optional water-use categories are commercial, hydroelectric power, and wastewater treatment. Optional data elements are public-supply deliveries to commercial, industrial, and thermoelectric-power users; consumptive use (for categories other than thermoelectric power and irrigation); irrigation conveyance loss; and number of facilities. Aggregation of water-use data by stream basin (eight-digit hydrologic unit code) and principal aquifers also is optional. Water-use data compiled by the States will be stored in the USGS Aggregate Water-Use Data System (AWUDS). This database is a comprehensive aggregated database designed to store mandatory and optional data elements. AWUDS contains several routines that can be used for quality assurance and quality control of the data, and AWUDS produces tables of water-use data from the previous compilations.

  5. Validation and application of Acoustic Mapping Velocimetry

    NASA Astrophysics Data System (ADS)

    Baranya, Sandor; Muste, Marian

    2016-04-01

    The goal of this paper is to introduce a novel methodology to estimate bedload transport in rivers based on an improved bedform tracking procedure. The measurement technique combines components and processing protocols from two contemporary nonintrusive instruments: acoustic and image-based. The bedform mapping is conducted with acoustic surveys while the estimation of the velocity of the bedforms is obtained with processing techniques pertaining to image-based velocimetry. The technique is therefore called Acoustic Mapping Velocimetry (AMV). The implementation of this technique produces a whole-field velocity map associated with the multi-directional bedform movement. Based on the calculated two-dimensional bedform migration velocity field, the bedload transport estimation is done using the Exner equation. A proof-of-concept experiment was performed to validate the AMV based bedload estimation in a laboratory flume at IIHR-Hydroscience & Engineering (IIHR). The bedform migration was analysed at three different flow discharges. Repeated bed geometry mapping, using a multiple transducer array (MTA), provided acoustic maps, which were post-processed with a particle image velocimetry (PIV) method. Bedload transport rates were calculated along longitudinal sections using the streamwise components of the bedform velocity vectors and the measured bedform heights. The bulk transport rates were compared with the results from concurrent direct physical samplings and acceptable agreement was found. As a first field implementation of the AMV, an attempt was made to estimate bedload transport for a section of the Ohio River in the United States, where bed geometry maps obtained from repeated multibeam echo sounder (MBES) surveys served as input data. Cross-sectional distributions of bedload transport rates from the AMV based method were compared with the ones obtained from another non-intrusive technique (due to the lack of direct samplings), ISSDOTv2, developed by the US Army Corps of Engineers. The good agreement between the results from the two different methods is encouraging and suggests further field tests in varying hydro-morphological situations.
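    The sketch below illustrates the kind of dune-tracking bedload estimate implied by the Exner-equation step, using the common simplified relation q_b = (1 - porosity) x shape factor x bedform height x migration velocity; the shape factor of 0.5 (triangular bedforms) and all numerical values are assumptions for illustration, not data from the study.

      # Simplified dune-tracking estimate of bedload transport, in the spirit of the
      # Exner-equation approach described in the abstract. The relation
      #   q_b = (1 - porosity) * shape_factor * bedform_height * migration_velocity
      # (shape_factor ~ 0.5 for roughly triangular bedforms) is a common approximation;
      # all numerical values below are illustrative, not data from the study.
      import numpy as np

      porosity = 0.4            # bed sediment porosity (-)
      shape_factor = 0.5        # triangular bedform assumption (-)

      # Per-section bedform heights (m) and streamwise migration velocities (m/s),
      # e.g. heights from acoustic maps and velocities from the PIV-style tracking.
      heights = np.array([0.20, 0.35, 0.28, 0.15])
      velocities = np.array([2.0e-4, 1.5e-4, 1.8e-4, 2.5e-4])

      # Volumetric bedload transport rate per unit width (m^2/s) for each section.
      q_b = (1.0 - porosity) * shape_factor * heights * velocities
      print("per-section q_b [m^2/s]:", q_b)
      print("section-averaged q_b [m^2/s]:", q_b.mean())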

  6. Phosphate rock resources of the United States

    USGS Publications Warehouse

    Cathcart, James Bachelder; Sheldon, Richard Porter; Gulbrandsen, Robert A.

    1984-01-01

    In 1980, the United States produced about 54 million tons of phosphate rock, or about 40 percent of the world's production, of which a substantial amount was exported, both as phosphate rock and as chemical fertilizer. During the last decade, predictions have been made that easily minable, low-cost reserves of phosphate rock would be exhausted, and that by the end of this century, instead of being a major exporter of phosphate rock, the United States might become a net importer. Most analysts today, however, think that exports will indeed decline in the next one or two decades, but that resources of phosphate are sufficient to supply domestic needs for a long time into the future. What will happen in the future depends on the actual availability of low-cost phosphate rock reserves in the United States and in the world. A realistic understanding of future phosphate rock reserves is dependent on an accurate assessment, now, of national phosphate rock resources. Many different estimates of resources exist; none of them alike. The detailed analysis of past resource estimates presented in this report indicates that the estimates differ more in what is being estimated than in how much is thought to exist. The phosphate rock resource classification used herein is based on the two fundamental aspects of a mineral resource: (1) the degree of certainty of existence and (2) the feasibility of economic recovery. The comparison of past estimates (including all available company data), combined with the writers' personal knowledge, indicates that 17 billion metric tons of identified, recoverable phosphate rock exist in the United States, of which about 7 billion metric tons are thought to be economic or marginally economic. The remaining 10 billion metric tons, mostly in the Northwestern phosphate district of Idaho, are considered to be subeconomic, minable when some increase in the price of phosphate occurs. More than 16 billion metric tons probably exist in the southeastern Coastal Plain phosphate province, principally in Florida and North Carolina and offshore in the shallow Atlantic Ocean from North Carolina to southern Florida. This resource is considered to be hypothetical because it is based on geologic inference combined with sparse drilling data. Total resources of phosphate rock in the United States are sufficient to supply domestic demands for the foreseeable future, provided that drilling is done to confirm hypothetical resources and the chemistry of the deposits is determined. Mining and beneficiation techniques will have to be modified or improved, and new techniques will have to be developed so that these deposits can be profitably exploited.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Back, H. O.; Bottenus, D. R.; Clayton, C.

    The next generation of 136Xe neutrinoless double beta decay experiments will require on the order of 5 tons of enriched 136Xe. By estimating the relative volatilities of the xenon isotopes and using standard chemical engineering techniques, we explore the feasibility of using cryogenic distillation to produce 5 tons of 80% enriched 136Xe in 5-6 years. With current state-of-the-art distillation column packing materials we can estimate the total height of a traditional cryogenic distillation column. We also report on how Micro Channel Distillation may reduce the overall size of a distillation system for 136Xe production.

  8. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    PubMed

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty-system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures by the fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
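    The covariance-resetting idea can be illustrated with a generic Kalman-filter-style recursive parameter estimator: when the normalized innovation exceeds a threshold, the parameter covariance is inflated so the estimator re-adapts after an abrupt fault. The linear regression model, threshold, and noise levels below are assumptions and do not reproduce the paper's NARMAX formulation.

      # Sketch of a Kalman-filter-style recursive parameter estimator with a simple
      # covariance-resetting rule, illustrating the fault-recovery idea in the abstract:
      # when the innovation grows beyond a threshold, the parameter covariance is reset
      # (inflated) so the estimator re-adapts quickly. Model, threshold and noise levels
      # are illustrative assumptions, not the paper's NARMAX setup.
      import numpy as np

      rng = np.random.default_rng(0)
      n_steps, n_params = 400, 2
      theta_true = np.array([1.0, -0.5])

      theta_hat = np.zeros(n_params)          # parameter estimate
      P = np.eye(n_params) * 10.0             # parameter error covariance
      R = 0.05 ** 2                           # measurement noise variance
      reset_threshold = 6.0                   # innovation threshold (in standard deviations)

      for k in range(n_steps):
          if k == 200:                        # abrupt "fault": parameters jump
              theta_true = np.array([-1.0, 0.8])
          phi = rng.standard_normal(n_params)             # regressor
          y = phi @ theta_true + rng.normal(0.0, np.sqrt(R))

          innovation = y - phi @ theta_hat
          S = phi @ P @ phi + R                           # innovation variance
          if abs(innovation) / np.sqrt(S) > reset_threshold:
              P = np.eye(n_params) * 10.0                 # covariance reset -> fast re-adaptation
              S = phi @ P @ phi + R
          K = P @ phi / S                                 # Kalman gain
          theta_hat = theta_hat + K * innovation
          P = P - np.outer(K, phi) @ P

      print("estimate after fault and recovery:", theta_hat)   # close to [-1.0, 0.8]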

  9. Comparison of Sequential and Variational Data Assimilation

    NASA Astrophysics Data System (ADS)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e., they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. In our view, this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to the lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
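    A minimal stochastic Ensemble Kalman Filter analysis step, of the sequential kind contrasted with variational assimilation above, is sketched below; the toy state vector, observation operator, and error statistics are illustrative assumptions rather than the HBV configuration used in the study.

      # Minimal stochastic Ensemble Kalman Filter analysis step, illustrating the kind
      # of sequential update discussed in the abstract. The toy state, observation
      # operator and error statistics are illustrative; this is not the HBV configuration
      # used in the study.
      import numpy as np

      rng = np.random.default_rng(0)
      n_state, n_obs, n_ens = 3, 1, 50

      # Forecast ensemble (e.g., model states after perturbing precipitation/temperature).
      X_f = rng.normal(loc=[10.0, 5.0, 1.0], scale=[2.0, 1.0, 0.5], size=(n_ens, n_state)).T

      H = np.array([[1.0, 0.0, 0.0]])          # observe the first state variable (streamflow proxy)
      R = np.array([[0.5 ** 2]])               # observation error covariance
      y_obs = np.array([12.0])                 # the measurement

      # Ensemble statistics.
      x_mean = X_f.mean(axis=1, keepdims=True)
      A = X_f - x_mean                                         # anomalies
      P_f = A @ A.T / (n_ens - 1)                              # sample covariance

      # Kalman gain and perturbed-observation update.
      K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
      Y_pert = y_obs[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), size=(n_obs, n_ens))
      X_a = X_f + K @ (Y_pert - H @ X_f)

      print("forecast mean:", X_f.mean(axis=1))
      print("analysis mean:", X_a.mean(axis=1))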

  10. NOAA Atlas 14: Updated Precipitation Frequency Estimates for the United States

    NASA Astrophysics Data System (ADS)

    Pavlovic, S.; Perica, S.; Martin, D.; Roy, I.; StLaurent, M.; Trypaluk, C.; Unruh, D.; Yekta, M.; Bonnin, G. M.

    2013-12-01

    NOAA Atlas 14 precipitation frequency estimates, developed by the National Weather Service's Hydrometeorological Design Studies Center, serve as the de-facto standards for a wide variety of design and planning activities under federal, state, and local regulations. Precipitation frequency estimates are used in the design of drainage for highways, culverts, bridges, and parking lots, as well as in sizing sewer and stormwater infrastructure. Water resources engineers use them to estimate the amount of runoff, to estimate the volume of detention basins and size detention-basin outlet structures, and to estimate the volume of sediment or the amount of erosion. They are also used by floodplain managers to delineate floodplains and regulate the development in floodplains, which is crucial for all communities in the National Flood Insurance Program. The Hydrometeorological Design Studies Center now provides more than 35,000 downloads per month from its Precipitation Frequency Data Server. Precipitation frequency estimates are often used in engineering design without any understanding of how these estimates have been developed or of the uncertainties associated with them. This presentation will describe novel tools and techniques that have been developed in recent years to determine precipitation frequency estimates in NOAA Atlas 14. Particular attention will be given to the regional frequency analysis approach based on L-moment statistics calculated from annual maximum series, selected statistics obtained in determining and parameterizing the probability distribution functions, and the potential implications for engineering design of recently published estimates.
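    The sketch below shows how the sample L-moments underlying such a regional frequency analysis can be computed from a single annual maximum series via probability-weighted moments; the synthetic Gumbel-distributed series is an illustrative assumption, not NOAA Atlas 14 data.

      # Sketch of sample L-moment computation from an annual maximum series (AMS),
      # the kind of statistic underlying the regional frequency analysis mentioned in
      # the abstract. The synthetic AMS below is illustrative only.
      import numpy as np

      def sample_l_moments(x):
          """Return L-mean, L-scale, L-CV and L-skewness via unbiased probability-weighted moments."""
          x = np.sort(np.asarray(x, dtype=float))          # ascending order
          n = len(x)
          j = np.arange(1, n + 1)
          b0 = x.mean()
          b1 = np.sum((j - 1) / (n - 1) * x) / n
          b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
          l1 = b0
          l2 = 2 * b1 - b0
          l3 = 6 * b2 - 6 * b1 + b0
          return l1, l2, l2 / l1, l3 / l2                  # mean, scale, L-CV, L-skewness

      # Synthetic 40-year annual maximum precipitation series (mm), illustrative values.
      rng = np.random.default_rng(42)
      ams = rng.gumbel(loc=60.0, scale=15.0, size=40)
      l1, l2, lcv, lskew = sample_l_moments(ams)
      print(f"L-mean={l1:.1f} mm, L-scale={l2:.1f} mm, L-CV={lcv:.3f}, L-skewness={lskew:.3f}")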

  11. NOAA Atlas 14: Updated Precipitation Frequency Estimates for the United States

    NASA Astrophysics Data System (ADS)

    Pavlovic, S.; Perica, S.; Martin, D.; Roy, I.; StLaurent, M.; Trypaluk, C.; Unruh, D.; Yekta, M.; Bonnin, G. M.

    2011-12-01

    NOAA Atlas 14 precipitation frequency estimates, developed by the National Weather Service's Hydrometeorological Design Studies Center, serve as the de-facto standards for a wide variety of design and planning activities under federal, state, and local regulations. Precipitation frequency estimates are used in the design of drainage for highways, culverts, bridges, and parking lots, as well as in sizing sewer and stormwater infrastructure. Water resources engineers use them to estimate the amount of runoff, to estimate the volume of detention basins and size detention-basin outlet structures, and to estimate the volume of sediment or the amount of erosion. They are also used by floodplain managers to delineate floodplains and regulate the development in floodplains, which is crucial for all communities in the National Flood Insurance Program. The Hydrometeorological Design Studies Center now provides more than 35,000 downloads per month from its Precipitation Frequency Data Server. Precipitation frequency estimates are often used in engineering design without any understanding of how these estimates have been developed or of the uncertainties associated with them. This presentation will describe novel tools and techniques that have been developed in recent years to determine precipitation frequency estimates in NOAA Atlas 14. Particular attention will be given to the regional frequency analysis approach based on L-moment statistics calculated from annual maximum series, selected statistics obtained in determining and parameterizing the probability distribution functions, and the potential implications for engineering design of recently published estimates.

  12. Estimating the Population Sizes of Men Who Have Sex With Men in US States and Counties Using Data From the American Community Survey

    PubMed Central

    Bernstein, Kyle T; Sullivan, Patrick S; Purcell, David W; Chesson, Harrell W; Gift, Thomas L; Rosenberg, Eli S

    2016-01-01

    Background: In the United States, male-to-male sexual transmission accounts for the greatest number of new human immunodeficiency virus (HIV) diagnoses and a substantial number of sexually transmitted infections (STI) annually. However, the prevalence and annual incidence of HIV and other STIs among men who have sex with men (MSM) cannot be estimated in local contexts because demographic data on sexual behavior, particularly same-sex behavior, are not routinely collected by large-scale surveys that allow analysis at state, county, or finer levels, such as the US decennial census or the American Community Survey (ACS). Therefore, techniques for indirectly estimating population sizes of MSM are necessary to supply denominators for rates at various geographic levels. Objective: Our objectives were to indirectly estimate MSM population sizes at the county level to incorporate recent data estimates and to aggregate county-level estimates to states and core-based statistical areas (CBSAs). Methods: We used data from the ACS to calculate a weight for each county in the United States based on its relative proportion of households that were headed by a male who lived with a male partner, compared with the overall proportion among counties at the same level of urbanicity (ie, large central metropolitan county, large fringe metropolitan county, medium/small metropolitan county, or nonmetropolitan county). We then used this weight to adjust the urbanicity-stratified percentage of adult men who had sex with a man in the past year, according to estimates derived from the National Health and Nutrition Examination Survey (NHANES), for each county. We multiplied the weighted percentages by the number of adult men in each county to estimate its number of MSM, summing county-level estimates to create state- and CBSA-level estimates. Finally, we scaled our estimated MSM population sizes to a meta-analytic estimate of the percentage of US MSM in the past 5 years (3.9%). Results: We found that the percentage of MSM among adult men ranged from 1.5% (Wyoming) to 6.0% (Rhode Island) among states. Over one-quarter of MSM in the United States resided in 1 of 13 counties. Among counties with over 300,000 residents, the five highest county-level percentages of MSM were San Francisco County, California at 18.5% (66,586/359,566); New York County, New York at 13.8% (87,556/635,847); Denver County, Colorado at 10.5% (25,465/243,002); Multnomah County, Oregon at 9.9% (28,949/292,450); and Suffolk County, Massachusetts at 9.1% (26,338/289,634). Although California (n=792,750) and Los Angeles County (n=251,521) had the largest MSM populations of states and counties, respectively, the New York City-Newark-Jersey City CBSA had the most MSM of all CBSAs (n=397,399). Conclusions: We used a new method to generate small-area estimates of MSM populations, incorporating prior work, recent data, and urbanicity-specific parameters. We also used an imputation approach to estimate MSM in rural areas, where same-sex sexual behavior may be underreported. Our approach yielded estimates of MSM population sizes within states, counties, and metropolitan areas in the United States, which provide denominators for calculation of HIV and STI prevalence and incidence at those geographic levels. PMID:27227149

  13. Estimating the Population Sizes of Men Who Have Sex With Men in US States and Counties Using Data From the American Community Survey.

    PubMed

    Grey, Jeremy A; Bernstein, Kyle T; Sullivan, Patrick S; Purcell, David W; Chesson, Harrell W; Gift, Thomas L; Rosenberg, Eli S

    2016-01-01

    In the United States, male-to-male sexual transmission accounts for the greatest number of new human immunodeficiency virus (HIV) diagnoses and a substantial number of sexually transmitted infections (STI) annually. However, the prevalence and annual incidence of HIV and other STIs among men who have sex with men (MSM) cannot be estimated in local contexts because demographic data on sexual behavior, particularly same-sex behavior, are not routinely collected by large-scale surveys that allow analysis at state, county, or finer levels, such as the US decennial census or the American Community Survey (ACS). Therefore, techniques for indirectly estimating population sizes of MSM are necessary to supply denominators for rates at various geographic levels. Our objectives were to indirectly estimate MSM population sizes at the county level to incorporate recent data estimates and to aggregate county-level estimates to states and core-based statistical areas (CBSAs). We used data from the ACS to calculate a weight for each county in the United States based on its relative proportion of households that were headed by a male who lived with a male partner, compared with the overall proportion among counties at the same level of urbanicity (ie, large central metropolitan county, large fringe metropolitan county, medium/small metropolitan county, or nonmetropolitan county). We then used this weight to adjust the urbanicity-stratified percentage of adult men who had sex with a man in the past year, according to estimates derived from the National Health and Nutrition Examination Survey (NHANES), for each county. We multiplied the weighted percentages by the number of adult men in each county to estimate its number of MSM, summing county-level estimates to create state- and CBSA-level estimates. Finally, we scaled our estimated MSM population sizes to a meta-analytic estimate of the percentage of US MSM in the past 5 years (3.9%). We found that the percentage of MSM among adult men ranged from 1.5% (Wyoming) to 6.0% (Rhode Island) among states. Over one-quarter of MSM in the United States resided in 1 of 13 counties. Among counties with over 300,000 residents, the five highest county-level percentages of MSM were San Francisco County, California at 18.5% (66,586/359,566); New York County, New York at 13.8% (87,556/635,847); Denver County, Colorado at 10.5% (25,465/243,002); Multnomah County, Oregon at 9.9% (28,949/292,450); and Suffolk County, Massachusetts at 9.1% (26,338/289,634). Although California (n=792,750) and Los Angeles County (n=251,521) had the largest MSM populations of states and counties, respectively, the New York City-Newark-Jersey City CBSA had the most MSM of all CBSAs (n=397,399). We used a new method to generate small-area estimates of MSM populations, incorporating prior work, recent data, and urbanicity-specific parameters. We also used an imputation approach to estimate MSM in rural areas, where same-sex sexual behavior may be underreported. Our approach yielded estimates of MSM population sizes within states, counties, and metropolitan areas in the United States, which provide denominators for calculation of HIV and STI prevalence and incidence at those geographic levels.
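    The county weighting arithmetic described above can be illustrated with a toy example for three hypothetical counties in one urbanicity stratum; the ACS household proportions, NHANES percentage, and population counts below are made-up values, and only the sequence of operations mirrors the described method.

      # Toy walk-through of the county weighting scheme described in the abstract,
      # using made-up numbers for three hypothetical counties in one urbanicity stratum.
      # The ACS household proportions, NHANES percentage and population counts are all
      # illustrative; only the arithmetic mirrors the described method.
      import numpy as np

      # ACS: proportion of households headed by a male living with a male partner.
      county_ssmc_prop = np.array([0.010, 0.004, 0.006])   # per county
      stratum_ssmc_prop = county_ssmc_prop.mean()          # same-urbanicity reference proportion

      # NHANES: % of adult men reporting sex with a man in the past year for this urbanicity level.
      nhanes_pct = 0.035

      adult_men = np.array([250_000, 80_000, 120_000])     # adult male population per county

      weight = county_ssmc_prop / stratum_ssmc_prop        # county weight relative to its stratum
      msm_est = weight * nhanes_pct * adult_men            # county-level MSM counts

      # Optional rescaling so the total matches a meta-analytic national percentage (3.9%).
      target_total = 0.039 * adult_men.sum()
      msm_scaled = msm_est * target_total / msm_est.sum()

      print("county MSM estimates:", np.round(msm_est))
      print("rescaled estimates:  ", np.round(msm_scaled))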

  14. Eight Stars of Gold--The Story of Alaska's Flag. Primary Grade Activities.

    ERIC Educational Resources Information Center

    Alaska State Museum, Juneau.

    This activities booklet focuses on the story of Alaska's flag. The booklet is intended for teachers to use with primary-grade children. Each activity in the booklet contains background information, a summary and time estimate, Alaska state standards, a step-by-step technique for implementing the activity, assessment tips, materials and resource…

  15. Operations Research techniques in the management of large-scale reforestation programs

    Treesearch

    Joseph Buongiorno; D.E. Teeguarden

    1978-01-01

    A reforestation planning system for the Douglas-fir region of the Western United States is described. Part of the system is a simulation model to predict plantation growth and to determine economic thinning regimes and rotation ages as a function of site characteristics, initial density, reforestation costs, and management constraints. A second model estimates the...

  16. Impact of the Illinois Seat Belt Use Law on Accidents, Deaths, and Injuries.

    ERIC Educational Resources Information Center

    Rock, Steven M.

    1992-01-01

    The impact of the 1985 Illinois seat belt law is explored using Box-Jenkins Auto-Regressive Integrated Moving Average (ARIMA) techniques and monthly accident statistical data from the state department of transportation for January-July 1990. A conservative estimate is that the law provides benefits of $15 million per month in Illinois. (SLD)
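    A hedged sketch of this type of Box-Jenkins intervention analysis is shown below, fitting an ARIMA model with a step ("law in effect") regressor to a synthetic monthly series using statsmodels; the series, effect size, and model order are illustrative and not the study's data.

      # Sketch of a Box-Jenkins-style intervention analysis: an ARIMA model with a step
      # ("law in effect") regressor fitted to a synthetic monthly injury series. The
      # series, effect size, and model order are illustrative; this is not the study's data.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(0)
      n_months = 120
      law_start = 60                                   # month the seat belt law takes effect

      step = np.zeros(n_months)
      step[law_start:] = 1.0                           # intervention dummy

      # Synthetic injury counts: AR(1) noise around a base level, dropping after the law.
      noise = np.zeros(n_months)
      for t in range(1, n_months):
          noise[t] = 0.5 * noise[t - 1] + rng.normal(0, 20)
      injuries = 1000 + noise - 150 * step

      model = ARIMA(injuries, exog=step.reshape(-1, 1), order=(1, 0, 0))
      result = model.fit()
      print(result.params)    # the exog coefficient estimates the post-law level shift (about -150)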

  17. Eight Stars of Gold--The Story of Alaska's Flag. Intermediate Activities (Grades 3-5).

    ERIC Educational Resources Information Center

    Alaska State Museum, Juneau.

    This activities booklet focuses on the story of Alaska's flag. The booklet is intended for teachers to use with students in the intermediate grades. Each activity in the booklet contains: background information, a summary and time estimate, state standards, a step-by-step technique for implementation of the activity, assessment tips, materials and…

  18. Eight Stars of Gold--The Story of Alaska's Flag. High School Activities (Grades 9-12).

    ERIC Educational Resources Information Center

    Alaska State Museum, Juneau.

    This activities booklet focuses on the story of Alaska's flag. The booklet is intended for use in teaching high school students. Each activity contains: background information; a summary and time estimate, Alaska state standards, a step-by-step technique for classroom implementation of the activity, assessment tips, materials and resources needed,…

  19. Estimating Am-241 activity in the body: comparison of direct measurements and radiochemical analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Timothy P.; Tolmachev, Sergei Y.; James, Anthony C.

    2009-06-01

    The assessment of dose and ultimately the health risk from intakes of radioactive materials begins with estimating the amount actually taken into the body. An accurate estimate provides the basis to best assess the distribution in the body, the resulting dose, and ultimately the health risk. This study continues the time-honored practice of evaluating the accuracy of results obtained using in vivo measurement methods and techniques. Results from the radiochemical analyses of the 241Am activity content of tissues and organs from four donors to the United States Transuranium and Uranium Registries were compared to the results from direct measurements of radioactive material in the body performed in vivo and post mortem. Two were whole body donations and two were partial body donations. The skeleton was the organ with the highest deposition of 241Am activity in all four cases. The activities ranged from 30 Bq to 300 Bq. The skeletal estimates obtained from measurements over the forehead were within 20% of the radiochemistry results in three cases and differed by 78% in one case. The 241Am lung activity estimates ranged from 1 Bq to 30 Bq in the four cases. The results from the direct measurements were within 40% of the radiochemistry results in 3 cases and within a factor of 3 for the other case. The direct measurement estimates of liver activity ranged from 2 Bq to 60 Bq and were generally lower than the radiochemistry results. The results from this study suggest that the measurement methods and calibration techniques used at the In Vivo Radiobioassay and Research Facility to quantify the activity in the lungs, skeleton and liver are reasonable under the most challenging conditions where there is 241Am activity in multiple organs. These methods and techniques are comparable to those used at other Department of Energy sites. This suggests that the current in vivo methods and calibration techniques provide reasonable estimates of radioactive material in the body. Not unexpectedly, there can be significant uncertainty in the estimates especially when activity is also present in other organs.

  20. Technique for simulating peak-flow hydrographs in Maryland

    USGS Publications Warehouse

    Dillow, Jonathan J.A.

    1998-01-01

    The efficient design and management of many bridges, culverts, embankments, and flood-protection structures may require the estimation of time-of-inundation and (or) storage of floodwater relating to such structures. These estimates can be made on the basis of information derived from the peak-flow hydrograph. Average peak-flow hydrographs corresponding to a peak discharge of specific recurrence interval can be simulated for drainage basins having drainage areas less than 500 square miles in Maryland, using a direct technique of known accuracy. The technique uses dimensionless hydrographs in conjunction with estimates of basin lagtime and instantaneous peak flow. Ordinary least-squares regression analysis was used to develop an equation for estimating basin lagtime in Maryland. Drainage area, main channel slope, forest cover, and impervious area were determined to be the significant explanatory variables necessary to estimate average basin lagtime at the 95-percent confidence interval. Qualitative variables included in the equation adequately correct for geographic bias across the State. The average standard error of prediction associated with the equation is approximated as plus or minus (+/-) 37.6 percent. Volume correction factors may be applied to the basin lagtime on the basis of a comparison between actual and estimated hydrograph volumes prior to hydrograph simulation. Three dimensionless hydrographs were developed and tested using data collected during 278 significant rainfall-runoff events at 81 stream-gaging stations distributed throughout Maryland and Delaware. The data represent a range of drainage area sizes and basin conditions. The technique was verified by applying it to the simulation of 20 peak-flow events and comparing actual and simulated hydrograph widths at 50 and 75 percent of the observed peak-flow levels. The events chosen are considered extreme in that the average recurrence interval of the selected peak flows is 130 years. The average standard errors of prediction were +/- 61 and +/- 56 percent at the 50 and 75 percent of peak-flow hydrograph widths, respectively.
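    The sketch below illustrates an ordinary least-squares fit of basin lagtime to the four explanatory variables named above, using an assumed log-log functional form and synthetic basin characteristics; neither the form nor the coefficients correspond to the published Maryland equation.

      # Hedged sketch of an ordinary least-squares regression for basin lagtime, of the
      # general kind described in the abstract. The log-log functional form, the synthetic
      # basin characteristics, and the coefficients used to generate the data are all
      # assumptions for illustration -- not the published Maryland equation.
      import numpy as np

      rng = np.random.default_rng(0)
      n_basins = 81

      area = rng.uniform(1, 500, n_basins)            # drainage area (mi^2)
      slope = rng.uniform(5, 200, n_basins)           # main channel slope (ft/mi)
      forest = rng.uniform(5, 90, n_basins)           # forest cover (%)
      impervious = rng.uniform(0.5, 40, n_basins)     # impervious area (%)

      # Synthetic "observed" lagtime generated from an assumed power-law relation plus noise.
      lag = 2.0 * area**0.35 * slope**-0.20 * (forest + 1)**0.10 * (impervious + 1)**-0.15
      lag *= np.exp(rng.normal(0, 0.15, n_basins))

      # Fit log(lag) = b0 + b1*log(area) + b2*log(slope) + b3*log(forest+1) + b4*log(impervious+1)
      X = np.column_stack([np.ones(n_basins), np.log(area), np.log(slope),
                           np.log(forest + 1), np.log(impervious + 1)])
      coef, *_ = np.linalg.lstsq(X, np.log(lag), rcond=None)
      print("intercept and exponents:", np.round(coef, 3))   # roughly [log 2, 0.35, -0.20, 0.10, -0.15]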

  1. Paleohydrologic techniques used to define the spatial occurrence of floods

    USGS Publications Warehouse

    Jarrett, R.D.

    1990-01-01

    Defining the cause and spatial characteristics of floods may be difficult because of limited streamflow and precipitation data. New paleohydrologic techniques that incorporate information from geomorphic, sedimentologic, and botanic studies provide important supplemental information to define homogeneous hydrologic regions. These techniques also help to define the spatial structure of rainstorms and floods and improve regional flood-frequency estimates. The occurrence and the non-occurrence of paleohydrologic evidence of floods, such as flood bars, alluvial fans, and tree scars, provide valuable hydrologic information. The paleohydrologic research to define the spatial characteristics of floods improves the understanding of flood hydrometeorology. This research was used to define the areal extent and contributing drainage area of flash floods in Colorado. Also, paleohydrologic evidence was used to define the spatial boundaries for the Colorado foothills region in terms of the meteorologic cause of flooding and elevation. In general, above 2300 m, peak flows are caused by snowmelt. Below 2300 m, peak flows primarily are caused by rainfall. The foothills region has an upper elevation limit of about 2300 m and a lower elevation limit of about 1500 m. Regional flood-frequency estimates that incorporate the paleohydrologic information indicate that the Big Thompson River flash flood of 1976 had a recurrence interval of approximately 10,000 years. This contrasts markedly with 100 to 300 years determined by using conventional hydrologic analyses. Flood-discharge estimates based on rainfall-runoff methods in the foothills of Colorado result in larger values than those estimated with regional flood-frequency relations, which are based on long-term streamflow data. Preliminary hydrologic and paleohydrologic research indicates that intense rainfall does not occur at higher elevations in other Rocky Mountain states and that the highest elevations for rainfall-producing floods vary by latitude. The study results have implications for floodplain management and design of hydraulic structures in the mountains of Colorado and other Rocky Mountain States. © 1990.

  2. A global atlas of GEOS-3 significant waveheight data and comparison of the data with national buoy data

    NASA Technical Reports Server (NTRS)

    Mcmillan, J. D.

    1981-01-01

    The accuracy of the GEOS-3 significant waveheight estimates, compared with buoy measurements of significant waveheight, was determined. A global atlas of the GEOS-3 significant waveheight estimates gathered is presented. The GEOS-3 significant waveheight estimation algorithm is derived by analyzing the return waveform characteristics of the altimeter. Convergence considerations are examined, the rationale for a smoothing technique is presented and the convergence characteristics of the smoothed estimate are discussed. The GEOS-3 data are selected for comparison with buoy measurements. The GEOS-3 significant waveheight estimates are assembled in the form of a global atlas of contour maps. Both high and low sea state contour maps are presented, and the data are displayed both by seasons and for the entire duration of the GEOS-3 mission.

  3. Flight data identification of six degree-of-freedom stability and control derivatives of a large crane type helicopter

    NASA Technical Reports Server (NTRS)

    Tomaine, R. L.

    1976-01-01

    Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.

  4. Noise-induced errors in geophysical parameter estimation from retarding potential analyzers in low Earth orbit

    NASA Astrophysics Data System (ADS)

    Debchoudhury, Shantanab; Earle, Gregory

    2017-04-01

    Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.

  5. Precision estimate for Odin-OSIRIS limb scatter retrievals

    NASA Astrophysics Data System (ADS)

    Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.

    2012-02-01

    The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which are available for download, are performed using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state is estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for the derivation of a numerical estimate of the covariance matrix for the retrieved state using the MART inversion that is sufficiently efficient to perform for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere where the natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision in the retrieved profiles.
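    A minimal multiplicative algebraic reconstruction technique (MART) iteration on a tiny, noise-free toy system is sketched below to illustrate the inversion family named above; the forward matrix, data, and relaxation parameter are illustrative and this is not the OSIRIS retrieval configuration.

      # Minimal multiplicative algebraic reconstruction technique (MART) iteration on a
      # tiny toy linear system, illustrating the inversion family named in the abstract.
      # The forward matrix, data, and relaxation parameter are illustrative only; this is
      # not the OSIRIS retrieval configuration.
      import numpy as np

      rng = np.random.default_rng(0)
      n_rows, n_cols = 6, 4
      A = rng.uniform(0.1, 1.0, size=(n_rows, n_cols))      # positive forward model
      x_true = np.array([1.0, 2.0, 0.5, 1.5])
      y = A @ x_true                                         # noise-free measurements

      x = np.ones(n_cols)                                    # positive initial guess
      lam = 0.5                                              # relaxation parameter
      for sweep in range(200):
          for i in range(n_rows):
              ratio = y[i] / (A[i] @ x)
              x *= ratio ** (lam * A[i])                     # multiplicative row update
      print("true state:   ", x_true)
      print("MART estimate:", np.round(x, 3))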

  6. MODIS Data Assimilation in the CROPGRO model for improving soybean yield estimations

    NASA Astrophysics Data System (ADS)

    Richetti, J.; Monsivais-Huertero, A.; Ahmad, I.; Judge, J.

    2017-12-01

    Soybean is one of the main agricultural commodities in the world; thus, having better estimates of its agricultural production is important. Improving soybean crop models in Brazil is crucial for better understanding of the soybean market and for enhancing decision making: Brazil is the second-largest soybean producer in the world, and Parana state is responsible for almost 20% of that production and would, by itself, be the fourth-largest soybean producer in the world. Data assimilation techniques provide a method to improve the spatio-temporal continuity of crop estimates through integration of remotely sensed observations and crop growth models. This study aims to use MODIS EVI to improve DSSAT-CROPGRO soybean yield estimations in Parana state, southern Brazil. The method uses the Ensemble Kalman filter, which assimilates MODIS Terra and Aqua combined products (MOD13Q1 and MYD13Q1) into the CROPGRO model to improve the agricultural production estimates through updates of light interception data over time. Expected results will be validated with monitored commercial farms during the period 2013-2014.

  7. A Bayesian Machine Learning Model for Estimating Building Occupancy from Open Source Data

    DOE PAGES

    Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.; ...

    2016-01-01

    Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian model and informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1000 ft2 for over 50 building types at the national and sub-national level with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics system tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, carrying out Bayesian analytics as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.

  8. Arterial Spin Labeling - Fast Imaging with Steady-State Free Precession (ASL-FISP): A Rapid and Quantitative Perfusion Technique for High Field MRI

    PubMed Central

    Gao, Ying; Goodnough, Candida L.; Erokwu, Bernadette O.; Farr, George W.; Darrah, Rebecca; Lu, Lan; Dell, Katherine M.; Yu, Xin; Flask, Chris A.

    2014-01-01

    Arterial Spin Labeling (ASL) is a valuable non-contrast perfusion MRI technique with numerous clinical applications. Many previous ASL MRI studies have utilized either Echo-Planar Imaging (EPI) or True Fast Imaging with Steady-State Free Precession (True FISP) readouts that are prone to off-resonance artifacts on high field MRI scanners. We have developed a rapid ASL-FISP MRI acquisition for high field preclinical MRI scanners providing perfusion-weighted images with little or no artifacts in less than 2 seconds. In this initial implementation, a FAIR (Flow-Sensitive Alternating Inversion Recovery) ASL preparation was combined with a rapid, centrically-encoded FISP readout. Validation studies on healthy C57/BL6 mice provided consistent estimation of in vivo mouse brain perfusion at 7 T and 9.4 T (249±38 ml/min/100g and 241±17 ml/min/100g, respectively). The utility of this method was further demonstrated in detecting significant perfusion deficits in a C57/BL6 mouse model of ischemic stroke. Reasonable kidney perfusion estimates were also obtained for a healthy C57/BL6 mouse exhibiting differential perfusion in the renal cortex and medulla. Overall, the ASL-FISP technique provides a rapid and quantitative in vivo assessment of tissue perfusion for high field MRI scanners with minimal image artifacts. PMID:24891124

  9. Determining the Oxygen Fugacity of Lunar Pyroclastic Glasses Using Vanadium Valence - An Update

    NASA Technical Reports Server (NTRS)

    Karner, J. M.; Sutton, S. R.; Papike, J. J.; Shearer, C. K.; Jones, J. H.; Newville, M.

    2004-01-01

    We have been developing an oxygen barometer based on the valence state of V (V(2+), V(3+), V(4+), and V(5+)) in solar system basaltic glasses. The V valence is determined by synchrotron micro x-ray absorption near edge structure (XANES), which uses x-ray absorption associated with core-electronic transitions (absorption edges) to reveal a pre-edge peak whose intensity is directly proportional to the valence state of an element. XANES has advantages over other techniques that determine elemental valence because measurements can be made non-destructively in air and in situ on conventional thin sections at a micrometer spatial resolution with elemental sensitivities of approx. 100 ppm. Recent results show that fO2 values derived from the V valence technique are consistent with fO2 estimates determined by other techniques for materials that crystallized above the IW buffer. The fO2's determined by V valence (IW-3.8 to IW-2) for the lunar pyroclastic glasses, however, are on the order of 1 to 2.8 log units below previous estimates. Furthermore, the calculated fO2's decrease with increasing TiO2 contents from the A17 VLT to the A17 Orange glasses. In order to investigate these results further, we have synthesized lunar green and orange glasses and examined them by XANES.

  10. Minimized-Laplacian residual interpolation for color image demosaicking

    NASA Astrophysics Data System (ADS)

    Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose a minimized-Laplacian residual interpolation (MLRI) as an alternative to the color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In the MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient based threshold free (GBTF) algorithm, which is one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm can outperform the state-of-the-art algorithms for the 30 images of the IMAX and the Kodak datasets.

  11. Analysis of mean time to data loss of fault-tolerant disk arrays RAID-6 based on specialized Markov chain

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.; D'K Novikova Freyre Shavier, G.

    2018-03-01

    This paper analyzes the mean time to data loss of redundant RAID-6 disk arrays with data alternation, considering different disk failure rates in the normal, degraded, and rebuild states of the array, as well as a nonzero disk replacement time. The reliability model developed by the authors on the basis of a Markov chain, and the resulting formula for estimating the mean time to data loss (MTTDL) of RAID-6 disk arrays, are presented. Finally, a technique for estimating the initial reliability parameters and examples of MTTDL calculations for RAID-6 arrays with different numbers of disks are given.
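    For orientation, the sketch below computes the MTTDL of the textbook simplified RAID-6 Markov model (single disk failure rate, instantaneous replacement) by solving the standard absorbing-chain relation over the transient states; it does not reproduce the authors' refined chain with state-dependent failure rates and nonzero replacement time, and the rates are illustrative.

      # Simplified continuous-time Markov model for RAID-6 MTTDL, solved with the standard
      # absorbing-chain relation (Q t = -1 over the transient states). This is the textbook
      # version with a single disk failure rate and instantaneous replacement; the paper's
      # refined chain (state-dependent failure rates, nonzero replacement time) is not
      # reproduced here. All rates are illustrative.
      import numpy as np

      n = 8                      # number of disks in the array
      lam = 1.0 / 100_000        # disk failure rate, 1/hours (illustrative)
      mu = 1.0 / 24              # rebuild rate, 1/hours (illustrative)

      # Transient states: 0 = all disks OK, 1 = one failed (rebuilding), 2 = two failed.
      # A third concurrent failure (data loss) is the absorbing event.
      Q = np.array([
          [-n * lam,               n * lam,                   0.0],
          [      mu, -(mu + (n - 1) * lam),         (n - 1) * lam],
          [     0.0,                    mu, -(mu + (n - 2) * lam)],
      ])

      t = np.linalg.solve(Q, -np.ones(3))   # expected time to absorption from each state
      mttdl_hours = t[0]
      print(f"MTTDL ~ {mttdl_hours:.3e} hours ~ {mttdl_hours / 8760:.1f} years")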

  12. Estimation of Metabolism Characteristics for Heat-Injured Bacteria Using Dielectrophoretic Impedance Measurement Method

    NASA Astrophysics Data System (ADS)

    Amako, Eri; Enjoji, Takaharu; Uchida, Satoshi; Tochikubo, Fumiyoshi

    Constant monitoring and immediate control of fermentation processes are required for advanced quality preservation in the food industry. In the present work, simple estimation of the metabolic states of heat-injured Escherichia coli (E. coli) in a micro-cell was investigated using the dielectrophoretic impedance measurement (DEPIM) method. The temporal change in the conductance across a micro-gap (ΔG) was measured for various heat-treatment temperatures. In addition, the dependence of enzyme activity, growth capacity, and membrane condition of E. coli on heat-treatment temperature was analyzed with conventional biological methods. A quantitative correlation between ΔG and these biological properties was obtained. This result suggests that the DEPIM method can serve as an effective monitoring technique for complex changes in the various biological states of microorganisms.

  13. A state space based approach to localizing single molecules from multi-emitter images.

    PubMed

    Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J

    2017-01-28

    Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.

  14. Estimating the number of male sex workers with the capture-recapture technique in Nigeria.

    PubMed

    Adebajo, Sylvia B; Eluwa, George I; Tocco, Jack U; Ahonsi, Babatunde A; Abiodun, Lolade Y; Anene, Oliver A; Akpona, Dennis O; Karlyn, Andrew S; Kellerman, Scott

    2013-12-01

    Estimating the size of populations most affected by HIV, such as men who have sex with men (MSM), though crucial for structuring responses to the epidemic, presents significant challenges, especially in a developing society. Using capture-recapture methodology, the size of the population of MSM sex workers (MSM-SW) in Nigeria was estimated in three major cities (Lagos, Kano and Port Harcourt) between July and December 2009. Following interviews with key informants, locations and times when MSM-SW were available to male clients were mapped and designated as "hotspots". Counts were conducted on two consecutive weekends. Population estimates were computed using a standardized Lincoln formula. Fifty-six hotspots were identified in Kano, 38 in Lagos and 42 in Port Harcourt. On a given weekend night, Port Harcourt had the largest estimated population of MSM sex workers, 723 (95% CI: 594-892), followed by Lagos state with 620 (95% CI: 517-724) and Kano state with 353 (95% CI: 332-373). This study documents a large population of MSM-SW in 3 Nigerian cities where higher HIV prevalence among MSM compared to the general population has been documented. Research and programming are needed to better understand and address the health vulnerabilities that MSM-SW and their clients face.
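    A sketch of the Lincoln-Petersen capture-recapture estimator with an approximate 95% confidence interval is given below; the counts are made up, and the study's standardized Lincoln formula and interval method may differ in detail.

      # Lincoln-Petersen capture-recapture sketch with a 95% confidence interval,
      # illustrating the type of estimator named in the abstract. The counts below are
      # made up; the study's "standardized Lincoln formula" and exact CI method may differ.
      import math

      n1 = 300      # individuals counted on the first capture occasion (weekend 1)
      n2 = 280      # individuals counted on the second occasion (weekend 2)
      m  = 120      # individuals seen on both occasions ("recaptures")

      # Classical Lincoln-Petersen point estimate.
      N_lp = n1 * n2 / m

      # Chapman's bias-corrected variant and its approximate variance.
      N_ch = (n1 + 1) * (n2 + 1) / (m + 1) - 1
      var_ch = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
      se = math.sqrt(var_ch)

      print(f"Lincoln-Petersen estimate: {N_lp:.0f}")
      print(f"Chapman estimate: {N_ch:.0f}  (95% CI ~ {N_ch - 1.96 * se:.0f}-{N_ch + 1.96 * se:.0f})")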

  15. A quick on-line state of health estimation method for Li-ion battery with incremental capacity curves processed by Gaussian filter

    NASA Astrophysics Data System (ADS)

    Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri

    2018-01-01

    This paper proposes an advanced state of health (SoH) estimation method for high energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used due to their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method for SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be greatly reduced. This method shows great potential to be applied in reality, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
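    The two processing steps described above can be sketched as follows: Gaussian smoothing of a noisy synthetic incremental-capacity (dQ/dV) curve, then a linear fit between capacity and the position of a feature of interest (FOI); the curve shapes, noise level, and ageing trend are illustrative assumptions, not the paper's NMC cell data.

      # Sketch of the two processing steps described in the abstract: Gaussian smoothing
      # of a noisy incremental-capacity (dQ/dV) curve, followed by a linear fit between
      # capacity and the position of a feature of interest (FOI). The synthetic curves and
      # ageing trend are illustrative; this is not the paper's NMC cell data.
      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      rng = np.random.default_rng(0)
      voltage = np.linspace(3.5, 4.2, 700)

      capacities, foi_positions = [], []
      for cap in np.linspace(2.5, 2.0, 8):                  # simulated capacity fade over ageing
          peak_v = 3.75 + 0.05 * (2.5 - cap)                # FOI peak drifts as the cell ages
          ic = cap * np.exp(-((voltage - peak_v) / 0.04) ** 2)        # idealized dQ/dV peak
          ic_noisy = ic + rng.normal(0, 0.05, voltage.size)           # measurement noise
          ic_smooth = gaussian_filter1d(ic_noisy, sigma=8)            # Gaussian-filter smoothing
          foi_positions.append(voltage[np.argmax(ic_smooth)])         # FOI = smoothed peak position
          capacities.append(cap)

      # Linear regression: capacity (SoH proxy) as a function of FOI position.
      slope, intercept = np.polyfit(foi_positions, capacities, 1)
      print(f"capacity ~ {slope:.2f} * V_FOI + {intercept:.2f}")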

  16. 80 and 100 Meter Wind Energy Resource Potential for the United States (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, D.; Schwartz, M.; Haymes, S.

    Accurate information about the wind potential in each state is required for federal and state policy initiatives that will expand the use of wind energy in the United States. The National Renewable Energy Laboratory (NREL) and AWS Truewind have collaborated to produce the first comprehensive new state-level assessment of wind resource potential since 1993. The estimates are based on high-resolution maps of predicted mean annual wind speeds for the contiguous 48 states developed by AWS Truewind. These maps, at a spatial resolution of 200 meters and heights of 60 to 100 meters, were created with a mesoscale-microscale modeling technique and adjusted to reduce errors through a bias-correction procedure involving data from more than 1,000 measurement masts. NREL used the capacity factor maps to estimate the wind energy potential capacity in megawatts for each state by capacity factor ranges. The purpose of this presentation is to (1) inform state and federal policy makers, regulators, developers, and other stakeholders on the availability of the new wind potential information that may influence development, (2) inform the audience of how the new information was derived, and (3) educate the audience on how the information should be interpreted in developing state and federal policy initiatives.

  17. Two biased estimation techniques in linear regression: Application to aircraft

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav

    1988-01-01

    Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit a damaging effect of collinearity are presented. These two techniques, the principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
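
    Of the two biased estimators discussed above, principal components regression is straightforward to sketch on deliberately collinear data (the toy regressors and noise level are assumptions made for illustration; mixed estimation is not shown):

        import numpy as np

        def pcr(X, y, n_components):
            """Principal components regression: regress y on the leading principal
            components of the centered, scaled regressors, then map the coefficients
            back to the original variables.  Dropping small-variance components
            limits the damaging effect of collinearity on the estimates."""
            mu, sd = X.mean(axis=0), X.std(axis=0)
            Xs = (X - mu) / sd
            yc = y - y.mean()
            # Eigensystem analysis of the correlation matrix of the regressors
            eigval, eigvec = np.linalg.eigh(Xs.T @ Xs / (len(y) - 1))
            V = eigvec[:, np.argsort(eigval)[::-1][:n_components]]
            gamma = np.linalg.lstsq(Xs @ V, yc, rcond=None)[0]   # fit in PC space
            beta = (V @ gamma) / sd                              # back to original scale
            return beta, y.mean() - beta @ mu

        # Collinear toy data: x2 is nearly a copy of x1
        rng = np.random.default_rng(1)
        x1 = rng.normal(size=200)
        x2 = x1 + rng.normal(scale=0.01, size=200)
        X = np.column_stack([x1, x2])
        y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(scale=0.1, size=200)
        print(pcr(X, y, n_components=1))   # stable coefficients near [2, 2]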

  18. Analysis of satellite altimeter signal characteristics and investigation of sea-truth data requirements

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Results are presented of analysis of satellite signal characteristics as influenced by ocean surface roughness and an investigation of sea truth data requirements. The first subject treated is that of postflight waveform reconstruction for the Skylab S-193 radar altimeter. Sea state estimation accuracies are derived based on analytical and hybrid computer simulation techniques. An analysis of near-normal incidence, microwave backscattering from the ocean's surface is accomplished in order to obtain the minimum sea truth data necessary for good agreement between theoretical and experimental scattering results. Sea state bias is examined from the point of view of designing an experiment which will lead to a resolution of the problem. A discussion is given of some deficiencies which were found in the theory underlying the Stilwell technique for spectral measurements.

  19. Topological and canonical kriging for design flood prediction in ungauged catchments: an improvement over a traditional regional regression approach?

    USGS Publications Warehouse

    Archfield, Stacey A.; Pugliese, Alessio; Castellarin, Attilio; Skøien, Jon O.; Kiang, Julie E.

    2013-01-01

    In the United States, estimation of flood frequency quantiles at ungauged locations has been largely based on regional regression techniques that relate measurable catchment descriptors to flood quantiles. More recently, spatial interpolation techniques of point data have been shown to be effective for predicting streamflow statistics (i.e., flood flows and low-flow indices) in ungauged catchments. Literature reports successful applications of two techniques, canonical kriging, CK (or physiographical-space-based interpolation, PSBI), and topological kriging, TK (or top-kriging). CK performs the spatial interpolation of the streamflow statistic of interest in the two-dimensional space of catchment descriptors. TK predicts the streamflow statistic along river networks taking both the catchment area and nested nature of catchments into account. It is of interest to understand how these spatial interpolation methods compare with generalized least squares (GLS) regression, one of the most common approaches to estimate flood quantiles at ungauged locations. By means of a leave-one-out cross-validation procedure, the performance of CK and TK was compared to GLS regression equations developed for the prediction of 10, 50, 100 and 500 yr floods for 61 streamgauges in the southeast United States. TK substantially outperforms GLS and CK for the study area, particularly for large catchments. The performance of TK over GLS highlights an important distinction between the treatments of spatial correlation when using regression-based or spatial interpolation methods to estimate flood quantiles at ungauged locations. The analysis also shows that coupling TK with CK slightly improves the performance of TK; however, the improvement is marginal when compared to the improvement in performance over GLS.

  20. Navigating complex sample analysis using national survey data.

    PubMed

    Saylor, Jennifer; Friedmann, Erika; Lee, Hyeon Joo

    2012-01-01

    The National Center for Health Statistics conducts the National Health and Nutrition Examination Survey and other national surveys with probability-based complex sample designs. Goals of national surveys are to provide valid data for the population of the United States. Analyses of data from population surveys present unique challenges in the research process but are valuable avenues to study the health of the United States population. The aim of this study was to demonstrate the importance of using complex data analysis techniques for data obtained with a complex multistage sampling design and provide an example of analysis using the SPSS Complex Samples procedure. Challenges and solutions specific to secondary data analysis of national databases are illustrated using the National Health and Nutrition Examination Survey as the exemplar. Oversampling of small or sensitive groups provides necessary estimates of variability within small groups. Use of weights without complex samples accurately estimates population means and frequency from the sample after accounting for over- or undersampling of specific groups. Weighting alone leads to inappropriate population estimates of variability, because they are computed as if the measures were from the entire population rather than a sample in the data set. The SPSS Complex Samples procedure allows inclusion of all sampling design elements, stratification, clusters, and weights. Use of national data sets allows use of extensive, expensive, and well-documented survey data for exploratory questions but limits analysis to those variables included in the data set. The large sample permits examination of multiple predictors and interactive relationships. Merging data files, availability of data in several waves of surveys, and complex sampling are techniques used to provide a representative sample but present unique challenges. With sophisticated data analysis techniques, use of these data can be optimized.

  1. Techniques for estimating peak-streamflow frequency for unregulated streams and streams regulated by small floodwater retarding structures in Oklahoma

    USGS Publications Warehouse

    Tortorelli, Robert L.

    1997-01-01

    Statewide regression equations for Oklahoma were determined for estimating peak discharge and flood frequency for selected recurrence intervals from 2 to 500 years for ungaged sites on natural unregulated streams. The most significant independent variables required to estimate peak-streamflow frequency for natural unregulated streams in Oklahoma are contributing drainage area, main-channel slope, and mean-annual precipitation. The regression equations are applicable for watersheds with drainage areas less than 2,510 square miles that are not affected by regulation from manmade works. Limitations on the use of the regression relations and the reliability of regression estimates for natural unregulated streams are discussed. Log-Pearson Type III analysis information, basin and climatic characteristics, and the peak-streamflow frequency estimates for 251 gaging stations in Oklahoma and adjacent states are listed. Techniques are presented to make a peak-streamflow frequency estimate for gaged sites on natural unregulated streams and to use this result to estimate peak-streamflow frequency at a nearby ungaged site on the same stream. For ungaged sites on urban streams, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow frequency. For ungaged sites on streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow frequency. The statewide regression equations are adjusted by substituting the drainage area below the floodwater retarding structures, or drainage area that represents the percentage of the unregulated basin, in the contributing drainage area parameter to obtain peak-streamflow frequency estimates.
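
    Statewide equations of this kind are power-law relations fitted by ordinary least squares in log space. A minimal sketch with invented gaged-site values (these numbers are not the report's data and the resulting coefficients are not its published equations):

        import numpy as np

        # Regional regression of the form Q100 = a * A^b * S^c * P^d, fitted in log
        # space.  A: contributing drainage area, S: main-channel slope, P: mean
        # annual precipitation.  All values below are made up for illustration.
        A = np.array([12.0, 85.0, 240.0, 610.0, 1500.0])                  # mi^2
        S = np.array([45.0, 22.0, 14.0, 9.0, 5.0])                        # ft/mi
        P = np.array([30.0, 34.0, 38.0, 40.0, 44.0])                      # in/yr
        Q100 = np.array([4200.0, 15500.0, 33000.0, 60000.0, 110000.0])    # ft^3/s

        X = np.column_stack([np.ones_like(A), np.log10(A), np.log10(S), np.log10(P)])
        coef, *_ = np.linalg.lstsq(X, np.log10(Q100), rcond=None)
        log_a, b, c, d = coef

        # Estimate at an ungaged site (hypothetical basin characteristics)
        A0, S0, P0 = 150.0, 18.0, 36.0
        print(10 ** (log_a + b * np.log10(A0) + c * np.log10(S0) + d * np.log10(P0)))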

  2. Comparison of diffusion length measurements from the Flying Spot Technique and the photocarrier grating method in amorphous thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vieira, M.; Fantoni, A.; Martins, R.

    1994-12-31

    Using the Flying Spot Technique (FST) the authors have studied minority carrier transport parallel and perpendicular to the surface of amorphous silicon films (a-Si:H). To reduce slow transients due to charge redistribution in low resistivity regions during the measurement they have applied a strong homogeneously absorbed bias light. The defect density was estimated from Constant Photocurrent Method (CPM) measurements. The steady-state photocarrier grating technique (SSPG) is a 1-dimensional approach. However, the modulation depth of the carrier profile is also dependent on film surface properties, like surface recombination velocity. Both methods yield comparable diffusion lengths when applied to a-Si:H.

  3. Transition of planar Couette flow at infinite Reynolds numbers.

    PubMed

    Itano, Tomoaki; Akinaga, Takeshi; Generalis, Sotos C; Sugihara-Seki, Masako

    2013-11-01

    An outline of the state space of planar Couette flow at high Reynolds numbers (Re < 10^5) is investigated via a variety of efficient numerical techniques. It is verified from nonlinear analysis that the lower branch of the hairpin vortex state (HVS) asymptotically approaches the primary (laminar) state with increasing Re. It is also predicted that the lower branch of the HVS at high Re belongs to the stability boundary that initiates a transition to turbulence, and that one of the unstable manifolds of the lower branch of HVS lies on the boundary. These facts suggest that the HVS may provide a criterion for estimating the minimum perturbation that gives rise to a transition to turbulent states in the infinite-Re limit.

  4. Thermodynamically constrained correction to ab initio equations of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, Martin; Mattsson, Thomas R.

    2014-07-07

    We show how equations of state generated by density functional theory methods can be augmented to match experimental data without distorting the correct behavior in the high- and low-density limits. The technique is thermodynamically consistent and relies on knowledge of the density and bulk modulus at a reference state and an estimation of the critical density of the liquid phase. We apply the method to four materials representing different classes of solids: carbon, molybdenum, lithium, and lithium fluoride. It is demonstrated that the corrected equations of state for both the liquid and solid phases show a significantly reduced dependence on the exchange-correlation functional used.

  5. Effect of retransmission and retrodiction on estimation and fusion in long-haul sensor networks

    DOE PAGES

    Liu, Qiang; Wang, Xin; Rao, Nageswara S. V.; ...

    2016-01-01

    In a long-haul sensor network, sensors are remotely deployed over a large geographical area to perform certain tasks, such as target tracking. In this work, we study the scenario where sensors take measurements of one or more dynamic targets and send state estimates of the targets to a fusion center via satellite links. The severe loss and delay inherent over the satellite channels reduce the number of estimates successfully arriving at the fusion center, thereby limiting the potential fusion gain and resulting in suboptimal accuracy performance of the fused estimates. In addition, the errors in target-sensor data association can also degrade the estimation performance. To mitigate the effect of imperfect communications on state estimation and fusion, we consider retransmission and retrodiction. The system adopts certain retransmission-based transport protocols so that lost messages can be recovered over time. Besides, retrodiction/smoothing techniques are applied so that the chances of incurring excess delay due to retransmission are greatly reduced. We analyze the extent to which retransmission and retrodiction can improve the performance of delay-sensitive target tracking tasks under variable communication loss and delay conditions. Finally, simulation results of a ballistic target tracking application demonstrate the validity of our analysis.

  6. Evaluation of alternative model-data fusion approaches in water balance estimation across Australia

    NASA Astrophysics Data System (ADS)

    van Dijk, A. I. J. M.; Renzullo, L. J.

    2009-04-01

    Australia's national agencies are developing a continental modelling system to provide a range of water information services. It will include rolling water balance estimation to underpin national water accounts, water resources assessments that interpret current water resources availability and trends in a historical context, and water resources predictions coupled to climate and weather forecasting. The nation-wide coverage, currency, accuracy, and consistency required mean that remote sensing will need to play an important role along with in-situ observations. Different approaches to blending models and observations can be considered. Integration of on-ground and remote sensing data into land surface models in atmospheric applications often involves state updating through model-data assimilation techniques. By comparison, retrospective water balance estimation and hydrological scenario modelling to date have mostly relied on static parameter fitting against observations and have made little use of earth observation. The model-data fusion approach most appropriate for a continental water balance estimation system will need to consider the trade-off between computational overhead and the accuracy gains achieved when using more sophisticated synthesis techniques and additional observations. This trade-off was investigated using a landscape hydrological model and satellite-based estimates of soil moisture and vegetation properties for several gauged test catchments in southeast Australia.

  7. Technical note: Alternatives to reduce adipose tissue sampling bias.

    PubMed

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been reported in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples from 1 to 15 needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied on adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and size and number of adipocytes were determined by a Coulter Counter. These results were then fit to a finite mixture model to obtain distribution parameters of each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed using both sampling techniques when the sample size increased from 1 to 15 samples, with both techniques' acceptance ratios increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.

  8. Methods for Estimating Withdrawal and Return Flow by Census Block for 2005 and 2020 for New Hampshire

    USGS Publications Warehouse

    Hayes, Laura; Horn, Marilee A.

    2009-01-01

    The U.S. Geological Survey, in cooperation with the New Hampshire Department of Environmental Services, estimated the amount of water demand, consumptive use, withdrawal, and return flow for each U.S. Census block in New Hampshire for the years 2005 (current) and 2020. Estimates of domestic, commercial, industrial, irrigation, and other nondomestic water use were derived through the use and innovative integration of several State and Federal databases, and by use of previously developed techniques. The New Hampshire Water Demand database was created as part of this study to store and integrate State of New Hampshire data central to the project. Within the New Hampshire Water Demand database, a lookup table was created to link the State databases and identify water users common to more than one database. The lookup table also allowed identification of withdrawal and return-flow locations of registered and unregistered commercial, industrial, agricultural, and other nondomestic users. Geographic information system data from the State were used in combination with U.S. Census Bureau spatial data to locate and quantify withdrawals and return flow for domestic users in each census block. Analyzing and processing the most recently available data resulted in census-block estimations of 2005 water use. Applying population projections developed by the State to the data sets enabled projection of water use for the year 2020. The results for each census block are stored in the New Hampshire Water Demand database and may be aggregated to larger political areas or watersheds to assess relative hydrologic stress on the basis of current and potential water availability.

  9. An online outlier identification and removal scheme for improving fault detection performance.

    PubMed

    Ferdowsi, Hasan; Jagannathan, Sarangapani; Zawodniok, Maciej

    2014-05-01

    Measured data or states for a nonlinear dynamic system are usually contaminated by outliers. Identifying and removing outliers will make the data (or system states) more trustworthy and reliable since outliers in the measured data (or states) can cause missed or false alarms during fault diagnosis. In addition, faults can make the system states nonstationary, which calls for a novel analytical model-based fault detection (FD) framework. In this paper, an online outlier identification and removal (OIR) scheme is proposed for a nonlinear dynamic system. Since the dynamics of the system can experience unknown changes due to faults, traditional observer-based techniques cannot be used to remove the outliers. The OIR scheme uses a neural network (NN) to estimate the actual system states from measured system states involving outliers. With this method, the outlier detection is performed online at each time instant by finding the difference between the estimated and the measured states and comparing its median with its standard deviation over a moving time window. The NN weight update law in OIR is designed such that the detected outliers will have no effect on the state estimation, which is subsequently used for model-based fault diagnosis. In addition, since the OIR estimator cannot distinguish between faulty and healthy operating conditions, a separate model-based observer is designed for fault diagnosis, which uses the OIR scheme as a preprocessing unit to improve the FD performance. The stability analysis of both the OIR and fault diagnosis schemes is presented. Finally, a three-tank benchmarking system and a simple linear system are used to verify the proposed scheme in simulations, and then the scheme is applied to an axial piston pump testbed. The scheme can be applied to nonlinear systems whose dynamics and underlying distribution of states are subject to change due to both unknown faults and operating conditions.
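
    One plausible reading of the window-based test described above (flag a residual that sits too far from the window median relative to the window standard deviation) is sketched below; the window length and threshold are assumptions:

        import numpy as np
        from collections import deque

        class OutlierFlagger:
            """Online flag for the residual between measured and estimated states,
            evaluated over a sliding time window."""
            def __init__(self, window=25, k=3.0):
                self.buf = deque(maxlen=window)
                self.k = k

            def update(self, measured, estimated):
                r = measured - estimated
                self.buf.append(r)
                if len(self.buf) < self.buf.maxlen:
                    return False                          # not enough history yet
                res = np.asarray(self.buf)
                return abs(r - np.median(res)) > self.k * res.std()

        # Usage with a hypothetical estimator output x_hat_k for measurement y_k:
        # flag = OutlierFlagger()
        # if flag.update(y_k, x_hat_k):
        #     pass   # treat y_k as an outlier so it does not corrupt the estimate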

  10. Application of isotope dilution technique in vitamin A nutrition.

    PubMed

    Wasantwisut, Emorn

    2002-09-01

    The isotope dilution technique involving deuterated retinol has been developed to quantitatively estimate total body reserves of vitamin A in humans. The technique provided good estimates in comparison to hepatic vitamin A concentrations in Bangladeshi surgical patients. Kinetic studies in the United States, Bangladesh, and Guatemala indicated the mean equilibration time of 17 to 20 days irrespective of the size of hepatic reserves. Due to the controversy surrounding the efficacy of a carotene-rich diet on improvement of vitamin A status, the isotope dilution technique was proposed to pursue this research question further (IAEA's coordinated research program). In the Philippines, schoolchildren with low serum retinol concentrations showed significant improvement in total body vitamin A stores following intake of carotene-rich foods (orange fruits and vegetables), using a three-day deuterated-retinol-dilution procedure. When Chinese kindergarten children were fed green and yellow vegetables during the winter, their total body vitamin A stores were sustained as compared to a steady decline of vitamin A stores in the control children. Likewise, daily consumption of purified beta-carotene or diet rich in provitamin A carotenoids were shown to prevent a loss in total body vitamin A stores among Thai lactating women during the rice-planting season. These studies demonstrate potentials of the isotope dilution technique to evaluate the impact of provitamin A carotenoid intervention programs.

  11. Thoracic respiratory motion estimation from MRI using a statistical model and a 2-D image navigator.

    PubMed

    King, A P; Buerger, C; Tsoumpas, C; Marsden, P K; Schaeffter, T

    2012-01-01

    Respiratory motion models have potential application for estimating and correcting the effects of motion in a wide range of applications, for example in PET-MR imaging. Given that motion cycles caused by breathing are only approximately repeatable, an important quality of such models is their ability to capture and estimate the intra- and inter-cycle variability of the motion. In this paper we propose and describe a technique for free-form nonrigid respiratory motion correction in the thorax. Our model is based on a principal component analysis of the motion states encountered during different breathing patterns, and is formed from motion estimates made from dynamic 3-D MRI data. We apply our model using a data-driven technique based on a 2-D MRI image navigator. Unlike most previously reported work in the literature, our approach is able to capture both intra- and inter-cycle motion variability. In addition, the 2-D image navigator can be used to estimate how applicable the current motion model is, and hence report when more imaging data is required to update the model. We also use the motion model to decide on the best positioning for the image navigator. We validate our approach using MRI data acquired from 10 volunteers and demonstrate improvements of up to 40.5% over other reported motion modelling approaches, which corresponds to 61% of the overall respiratory motion present. Finally we demonstrate one potential application of our technique: MRI-based motion correction of real-time PET data for simultaneous PET-MRI acquisition. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Real-Time Tracking of Selective Auditory Attention From M/EEG: A Bayesian Filtering Approach

    PubMed Central

    Miran, Sina; Akram, Sahar; Sheikhattar, Alireza; Simon, Jonathan Z.; Zhang, Tao; Babadi, Behtash

    2018-01-01

    Humans are able to identify and track a target speaker amid a cacophony of acoustic interference, an ability which is often referred to as the cocktail party phenomenon. Results from several decades of studying this phenomenon have culminated in recent years in various promising attempts to decode the attentional state of a listener in a competing-speaker environment from non-invasive neuroimaging recordings such as magnetoencephalography (MEG) and electroencephalography (EEG). To this end, most existing approaches compute correlation-based measures by either regressing the features of each speech stream to the M/EEG channels (the decoding approach) or vice versa (the encoding approach). To produce robust results, these procedures require multiple trials for training purposes. Also, their decoding accuracy drops significantly when operating at high temporal resolutions. Thus, they are not well-suited for emerging real-time applications such as smart hearing aid devices or brain-computer interface systems, where training data might be limited and high temporal resolutions are desired. In this paper, we close this gap by developing an algorithmic pipeline for real-time decoding of the attentional state. Our proposed framework consists of three main modules: (1) Real-time and robust estimation of encoding or decoding coefficients, achieved by sparse adaptive filtering, (2) Extracting reliable markers of the attentional state, and thereby generalizing the widely-used correlation-based measures thereof, and (3) Devising a near real-time state-space estimator that translates the noisy and variable attention markers to robust and statistically interpretable estimates of the attentional state with minimal delay. Our proposed algorithms integrate various techniques including forgetting factor-based adaptive filtering, ℓ1-regularization, forward-backward splitting algorithms, fixed-lag smoothing, and Expectation Maximization. We validate the performance of our proposed framework using comprehensive simulations as well as application to experimentally acquired M/EEG data. Our results reveal that the proposed real-time algorithms perform nearly as accurately as the existing state-of-the-art offline techniques, while providing a significant degree of adaptivity, statistical robustness, and computational savings. PMID:29765298

  13. Real-Time Tracking of Selective Auditory Attention From M/EEG: A Bayesian Filtering Approach.

    PubMed

    Miran, Sina; Akram, Sahar; Sheikhattar, Alireza; Simon, Jonathan Z; Zhang, Tao; Babadi, Behtash

    2018-01-01

    Humans are able to identify and track a target speaker amid a cacophony of acoustic interference, an ability which is often referred to as the cocktail party phenomenon. Results from several decades of studying this phenomenon have culminated in recent years in various promising attempts to decode the attentional state of a listener in a competing-speaker environment from non-invasive neuroimaging recordings such as magnetoencephalography (MEG) and electroencephalography (EEG). To this end, most existing approaches compute correlation-based measures by either regressing the features of each speech stream to the M/EEG channels (the decoding approach) or vice versa (the encoding approach). To produce robust results, these procedures require multiple trials for training purposes. Also, their decoding accuracy drops significantly when operating at high temporal resolutions. Thus, they are not well-suited for emerging real-time applications such as smart hearing aid devices or brain-computer interface systems, where training data might be limited and high temporal resolutions are desired. In this paper, we close this gap by developing an algorithmic pipeline for real-time decoding of the attentional state. Our proposed framework consists of three main modules: (1) Real-time and robust estimation of encoding or decoding coefficients, achieved by sparse adaptive filtering, (2) Extracting reliable markers of the attentional state, and thereby generalizing the widely-used correlation-based measures thereof, and (3) Devising a near real-time state-space estimator that translates the noisy and variable attention markers to robust and statistically interpretable estimates of the attentional state with minimal delay. Our proposed algorithms integrate various techniques including forgetting factor-based adaptive filtering, ℓ 1 -regularization, forward-backward splitting algorithms, fixed-lag smoothing, and Expectation Maximization. We validate the performance of our proposed framework using comprehensive simulations as well as application to experimentally acquired M/EEG data. Our results reveal that the proposed real-time algorithms perform nearly as accurately as the existing state-of-the-art offline techniques, while providing a significant degree of adaptivity, statistical robustness, and computational savings.
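
    Of the three modules listed above, the near real-time state-space estimator can be illustrated with a scalar fixed-lag smoother. The random-walk model, the noise variances and the synthetic markers below are assumptions made for the sketch, not the paper's pipeline:

        import numpy as np

        def fixed_lag_smoother(markers, lag=10, q=1e-3, r=0.1):
            """Minimal fixed-lag smoother for a scalar random-walk 'attention state'
            observed through noisy attention markers.  hist[t] stores
            (x_t|t, p_t|t, x_t|t-1, p_t|t-1); outputs are delayed by lag-1 samples."""
            x, p = 0.0, 1.0
            hist, out = [], []
            for z in markers:
                xp, pp = x, p + q                        # random-walk prediction
                k = pp / (pp + r)                        # Kalman gain
                x, p = xp + k * (z - xp), (1 - k) * pp
                hist.append((x, p, xp, pp))
                if len(hist) >= lag:                     # backward (RTS) sweep over the window
                    xs = hist[-1][0]
                    for t in range(len(hist) - 2, len(hist) - lag - 1, -1):
                        g = hist[t][1] / hist[t + 1][3]
                        xs = hist[t][0] + g * (xs - hist[t + 1][2])
                    out.append(xs)
            return np.array(out)

        rng = np.random.default_rng(0)
        truth = np.concatenate([np.zeros(150), np.ones(150)])    # attended-speaker switch
        markers = truth + rng.normal(0.0, 0.3, truth.size)
        smoothed = fixed_lag_smoother(markers)
        print(np.round(smoothed[::30], 2))   # near 0 before the switch, near 1 well after it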

  14. Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.

    PubMed

    Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong

    2014-09-01

    A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers, and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than those of the conventional estimators and equivalent to those of the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied, and an evaluation on real data recorded by an acoustic vector sensor array is presented. The performance of the MICCG and SICCG algorithms is compared with that of state-of-the-art approaches.
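
    The inversion-free idea underlying these beamformers can be illustrated generically: obtain R^-1 d by conjugate gradients and normalize so the distortionless constraint holds. The sketch below is not the MICCG or SICCG algorithm itself, and the array size, steering vector and sample covariance are assumptions:

        import numpy as np

        def cg_solve(R, b, iters=50, tol=1e-10):
            """Conjugate-gradient solution of R x = b for Hermitian positive-definite R,
            avoiding any explicit matrix inversion."""
            x = np.zeros_like(b)
            r = b - R @ x
            p = r.copy()
            rs = np.vdot(r, r)
            for _ in range(iters):
                Rp = R @ p
                alpha = rs / np.vdot(p, Rp)
                x += alpha * p
                r -= alpha * Rp
                rs_new = np.vdot(r, r)
                if rs_new.real < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        def mvdr_weights(R, d, iters=50):
            """MVDR beamformer w = R^-1 d / (d^H R^-1 d), with R^-1 d from CG."""
            x = cg_solve(R, d, iters)
            return x / np.vdot(d, x)

        # Toy narrowband example: 4-sensor array, unit steering vector d
        rng = np.random.default_rng(3)
        A = rng.normal(size=(4, 200)) + 1j * rng.normal(size=(4, 200))
        R = A @ A.conj().T / 200                      # sample covariance (Hermitian PD)
        d = np.ones(4, dtype=complex)
        w = mvdr_weights(R, d)
        print(np.vdot(d, w))                          # distortionless constraint: ≈ 1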

  15. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    PubMed Central

    Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej

    2014-01-01

    High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier's neck. A key issue is how to estimate other more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and skis trajectories based on a more faithful approximation of the skier's body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using the reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform the results of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing. PMID:25313492

  16. Comparison and testing of extended Kalman filters for attitude estimation of the Earth radiation budget satellite

    NASA Technical Reports Server (NTRS)

    Deutschmann, Julie; Bar-Itzhack, Itzhack Y.; Rokni, Mohammad

    1990-01-01

    The testing and comparison of two Extended Kalman Filters (EKFs) developed for the Earth Radiation Budget Satellite (ERBS) is described. One EKF updates the attitude quaternion using a four-component additive error quaternion. This technique is compared to that of a second EKF, which uses a multiplicative error quaternion. A brief development of the multiplicative algorithm is included. The mathematical development of the additive EKF was presented in the 1989 Flight Mechanics/Estimation Theory Symposium along with some preliminary testing results using real spacecraft data. A summary of the additive EKF algorithm is included. The convergence properties, singularity problems, and normalization techniques of the two filters are addressed. Both filters are also compared to results from the ERBS operational ground support software, which uses a batch differential correction algorithm to estimate attitude and gyro biases. Sensitivity studies are performed on the estimation of sensor calibration states. The potential application of the EKF for real-time and non-real-time ground attitude determination and sensor calibration for future missions such as the Gamma Ray Observatory (GRO) and the Small Explorer Mission (SMEX) is also presented.
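
    The difference between the two error-quaternion conventions can be sketched in a few lines. This is a generic illustration rather than the ERBS filters' implementation; the [w, x, y, z] ordering, the Hamilton product and left-multiplication of the error quaternion are assumed conventions:

        import numpy as np

        def quat_mult(q, r):
            """Hamilton product of quaternions given as [w, x, y, z]."""
            w0, x0, y0, z0 = q
            w1, x1, y1, z1 = r
            return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                             w0*x1 + x0*w1 + y0*z1 - z0*y1,
                             w0*y1 - x0*z1 + y0*w1 + z0*x1,
                             w0*z1 + x0*y1 - y0*x1 + z0*w1])

        def multiplicative_update(q_ref, dtheta):
            """Apply a small-angle error rotation dtheta (3-vector, rad) to the
            reference quaternion; unit norm is preserved to first order."""
            dq = np.concatenate([[1.0], 0.5 * np.asarray(dtheta)])   # small-angle error quaternion
            q = quat_mult(dq, q_ref)                                 # convention-dependent ordering
            return q / np.linalg.norm(q)

        def additive_update(q_ref, dq4):
            """Four-component additive correction followed by renormalization."""
            q = q_ref + dq4
            return q / np.linalg.norm(q)

        q0 = np.array([1.0, 0.0, 0.0, 0.0])
        print(multiplicative_update(q0, [0.01, -0.02, 0.005]))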

  17. Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity

    NASA Astrophysics Data System (ADS)

    Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.

    2008-08-01

    A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
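
    The nonlinear least-squares step described above is commonly carried out by fitting a Kelvin-Voigt dispersion model to the measured shear-wave speeds. In the sketch below the density, the synthetic dispersion data and the starting values are assumptions, not the paper's measurements:

        import numpy as np
        from scipy.optimize import curve_fit

        RHO = 1000.0   # tissue density, kg/m^3 (assumed)

        def voigt_speed(omega, mu, eta):
            """Shear-wave phase velocity in a Kelvin-Voigt material with shear
            modulus mu (Pa) and shear viscosity eta (Pa*s)."""
            m = np.sqrt(mu ** 2 + (omega * eta) ** 2)
            return np.sqrt(2.0 * m ** 2 / (RHO * (mu + m)))

        # Hypothetical dispersive speed measurements (not the paper's data)
        freq = np.array([100., 150., 200., 250., 300., 350., 400.])   # Hz
        omega = 2 * np.pi * freq
        c_meas = voigt_speed(omega, 12e3, 6.0)
        c_meas += np.random.default_rng(4).normal(0, 0.05, freq.size)

        (mu_hat, eta_hat), _ = curve_fit(voigt_speed, omega, c_meas, p0=(10e3, 3.0))
        print(mu_hat, eta_hat)    # frequency-independent estimates, ≈ 12 kPa and ≈ 6 Pa*s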

  18. Importance of Preserving Cross-correlation in developing Statistically Downscaled Climate Forcings and in estimating Land-surface Fluxes and States

    NASA Astrophysics Data System (ADS)

    Das Bhowmik, R.; Arumugam, S.

    2015-12-01

    Multivariate downscaling techniques exhibited superiority over univariate regression schemes in terms of preserving cross-correlations between multiple variables (precipitation and temperature) from GCMs. This study focuses on two aspects: (a) developing an analytical solution for estimating biases in cross-correlations from univariate downscaling approaches and (b) quantifying the uncertainty in land-surface states and fluxes due to biases in cross-correlations in downscaled climate forcings. Both these aspects are evaluated using climate forcings available from both historical climate simulations and CMIP5 hindcasts over the entire US. The analytical solution relates the univariate regression parameters, the coefficient of determination of the regression, and the covariance ratio between GCM and downscaled values. The analytical solutions are compared with the downscaled univariate forcings by choosing the desired p-value (Type-1 error) in preserving the observed cross-correlation. For quantifying the impacts of biases in cross-correlation on estimated streamflow and groundwater, we corrupt the downscaled climate forcings with different cross-correlation structures.

  19. Imaging brain microstructure with diffusion MRI: practicality and applications.

    PubMed

    Alexander, Daniel C; Dyrby, Tim B; Nilsson, Markus; Zhang, Hui

    2017-11-29

    This article gives an overview of microstructure imaging of the brain with diffusion MRI and reviews the state of the art. The microstructure-imaging paradigm aims to estimate and map microscopic properties of tissue using a model that links these properties to the voxel scale MR signal. Imaging techniques of this type are just starting to make the transition from the technical research domain to wide application in biomedical studies. We focus here on the practicalities of both implementing such techniques and using them in applications. Specifically, the article summarizes the relevant aspects of brain microanatomy and the range of diffusion-weighted MR measurements that provide sensitivity to them. It then reviews the evolution of mathematical and computational models that relate the diffusion MR signal to brain tissue microstructure, as well as the expanding areas of application. Next we focus on practicalities of designing a working microstructure imaging technique: model selection, experiment design, parameter estimation, validation, and the pipeline of development of this class of technique. The article concludes with some future perspectives on opportunities in this topic and expectations on how the field will evolve in the short-to-medium term. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Effects of measurement unobservability on neural extended Kalman filter tracking

    NASA Astrophysics Data System (ADS)

    Stubberud, Stephen C.; Kramer, Kathleen A.

    2009-05-01

    An important component of tracking fusion systems is the ability to fuse various sensors into a coherent picture of the scene. When multiple sensor systems are being used in an operational setting, the types of data vary. A significant but often overlooked concern of multiple sensors is the incorporation of measurements that are unobservable. An unobservable measurement is one that may provide information about the state, but cannot recreate a full target state. A line of bearing measurement, for example, cannot provide complete position information. Often, such measurements come from passive sensors such as a passive sonar array or an electronic surveillance measure (ESM) system. Unobservable measurements will, over time, cause the measurement uncertainty to grow without bound. While some tracking implementations have triggers to protect against the detrimental effects, many maneuver tracking algorithms avoid discussing this implementation issue. One maneuver tracking technique is the neural extended Kalman filter (NEKF). The NEKF is an adaptive estimation algorithm that estimates the target track as it trains a neural network on line to reduce the error between the a priori target motion model and the actual target dynamics. The weights of the neural network are trained in a manner similar to the state estimation/parameter estimation Kalman filter techniques. The NEKF has been shown to improve target tracking accuracy through maneuvers and has been used to predict target behavior using the new model that consists of the a priori model and the neural network. The key to the on-line adaptation of the NEKF is the fact that the neural network is trained using the same residuals as the Kalman filter for the tracker. The neural network weights are treated as augmented states to the target track. Through the state-coupling function, the weights are coupled to the target states. Thus, if the measurements cause the states of the target track to be unobservable, then the weights of the neural network have unobservable modes as well. In a recent analysis, the NEKF was shown to have a significantly larger growth in the eigenvalues of the error covariance matrix than the standard EKF tracker when the measurements were purely bearings-only. This caused detrimental effects to the ability of the NEKF to model the target dynamics. In this work, the analysis is expanded to determine the detrimental effects of bearings-only measurements of various uncertainties on the performance of the NEKF when these unobservable measurements are interlaced with completely observable measurements. This analysis provides the ability to put implementation limitations on the NEKF when bearings-only sensors are present.

  1. Radar Polarimetry: Theory, Analysis, and Applications

    NASA Astrophysics Data System (ADS)

    Hubbert, John Clark

    The fields of radar polarimetry and optical polarimetry are compared. The mathematics of optic polarimetry are formulated such that a local right-handed coordinate system is always used to describe the polarization states. This is not done in radar polarimetry. Radar optimum polarization theory is redeveloped within the framework of optical polarimetry. The radar optimum polarizations and optic eigenvalues of common scatterers are compared. In addition, a novel definition of an eigenpolarization state is given and the accompanying mathematics is developed. The polarization response calculated using the optic, radar and novel definitions is presented for a variety of scatterers. Polarimetric transformation provides a means to characterize scatterers in more than one polarization basis. Polarimetric transformation for an ensemble of scatterers is obtained via two methods: (1) the covariance method and (2) the instantaneous scattering matrix (ISM) method. The covariance method is used to relate the mean radar parameters of a ±45° linear polarization basis to those of a horizontal and vertical polarization basis. In contrast, the ISM method transforms the individual time samples. Algorithms are developed for transforming the time series from fully polarimetric radars that switch between orthogonal states. The transformed time series are then used to calculate the mean radar parameters of interest. It is also shown that propagation effects do not need to be removed from the ISMs before transformation. The techniques are demonstrated using data collected by POLDIRAD, the German Aerospace Research Establishment's fully polarimetric C-band radar. The differential phase observed between two copolar states, Ψ_CO, is composed of two phases: (1) the differential propagation phase, φ_DP, and (2) the differential backscatter phase, δ. The slope of φ_DP with range is an estimate of the specific differential phase, K_DP. The process of estimating K_DP is complicated when δ is present. Algorithms are presented for estimating δ and K_DP from range profiles of Ψ_CO. Also discussed are procedures for the estimation and interpretation of other radar measurables such as reflectivity, Z_HH, differential reflectivity, Z_DR, the magnitude of the copolar correlation coefficient, ρ_HV(0), and Doppler spectrum width, σ_v.
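
    The slope-based K_DP estimate mentioned above (specific differential phase as half the range derivative of φ_DP, fitted by least squares over a sliding range window) can be sketched as follows; the synthetic profile and window length are assumptions:

        import numpy as np

        def kdp_from_phidp(r_km, phidp_deg, window=25):
            """Specific differential phase K_DP (deg/km) from a range profile of
            differential propagation phase: half the least-squares slope of
            phi_DP versus range inside a sliding window."""
            half = window // 2
            kdp = np.full(len(r_km), np.nan)
            for i in range(half, len(r_km) - half):
                rr = r_km[i - half:i + half + 1]
                pp = phidp_deg[i - half:i + half + 1]
                slope = np.polyfit(rr, pp, 1)[0]          # deg/km, two-way
                kdp[i] = 0.5 * slope                      # one-way K_DP
            return kdp

        # Synthetic phi_DP profile: a 10 km rain cell with K_DP = 2 deg/km plus noise
        r = np.arange(0.0, 40.0, 0.25)                    # range, km
        phidp = 4.0 * np.clip(r - 15.0, 0.0, 10.0)        # two-way phase accumulation
        phidp += np.random.default_rng(9).normal(0, 1.0, r.size)
        print(np.nanmax(kdp_from_phidp(r, phidp)))        # ≈ 2 deg/km inside the cell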

  2. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
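
    The Latin Hypercube Sampling step that seeds the bank of parallel filter models might look like the following; the parameter names, ranges and model count are assumptions made for illustration:

        import numpy as np
        from scipy.stats import qmc

        def lhs_parameter_samples(bounds, n_models, seed=0):
            """Draw parameter vectors for the bank of parallel filter models using
            Latin Hypercube Sampling, so the model count does not have to grow with
            the number of uncertain parameters.  `bounds` maps name -> (low, high)
            over the current parameter range."""
            names = list(bounds)
            lows = [bounds[k][0] for k in names]
            highs = [bounds[k][1] for k in names]
            sampler = qmc.LatinHypercube(d=len(names), seed=seed)
            pts = qmc.scale(sampler.random(n_models), lows, highs)
            return [dict(zip(names, row)) for row in pts]

        # Hypothetical fault-related parameters to be tracked by the filter bank
        samples = lhs_parameter_samples({"mass": (0.8, 1.2), "damping": (0.05, 0.5)},
                                        n_models=8)
        for s in samples:
            print(s)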

  3. Short-Arc Analysis of Intersatellite Tracking Data in a Gravity Mapping Mission

    NASA Technical Reports Server (NTRS)

    Rowlands, David D.; Ray, Richard D.; Chinn, Douglas S.; Lemoine, Frank G.; Smith, David E. (Technical Monitor)

    2001-01-01

    A technique for the analysis of low-low intersatellite range-rate data in a gravity mapping mission is explored. The technique is based on standard tracking data analysis for orbit determination but uses a spherical coordinate representation of the 12 epoch state parameters describing the baseline between the two satellites. This representation of the state parameters is exploited to allow the intersatellite range-rate analysis to benefit from information provided by other tracking data types without large simultaneous multiple data type solutions. The technique appears especially valuable for estimating gravity from short arcs (e.g., less than 15 minutes) of data. Gravity recovery simulations which use short arcs are compared with those using arcs a day in length. For a high-inclination orbit, the short-arc analysis recovers low-order gravity coefficients remarkably well, although higher order terms, especially sectorial terms, are less accurate. Simulations suggest that either long or short arcs of GRACE data are likely to improve parts of the geopotential spectrum by orders of magnitude.

  4. Technique of estimation of actual strength of a gas pipeline section at its deformation in landslide action zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tcherni, V.P.

    1996-12-31

    A technique is given which permits determination of the stress and strain state (SSS) and estimation of the actual strength of a section of a buried main gas pipeline (GP) in the case of its deformation in a landslide action zone. The technique is based on the use of three-dimensional coordinates of axial points of the deformed GP section. These coordinates are obtained from a full-scale survey. The deformed axis of the surveyed GP section is described by a polynomial. The unknown coefficients of the polynomial can be determined from the boundary conditions at points of connection with contiguous undeformed sections as well as by use of minimization methods in mathematical processing of full-scale survey results. The resulting form of the GP section's axis allows one to determine curvatures and, accordingly, bending moments along the full length of the considered section. The influence of soil resistance to longitudinal displacements of a pipeline is used to determine longitudinal forces. Resulting values of bending moments and axial forces as well as the known value of internal pressure are used to analyze all necessary components of the actual SSS of the pipeline section and to estimate its strength by elastic analysis.
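
    A planar sketch of the idea: fit a polynomial to the surveyed axis, differentiate it to obtain curvature, and convert curvature to extreme-fiber bending stress. The pipe properties and survey points below are invented, and the full technique also accounts for axial force, soil resistance and internal pressure:

        import numpy as np

        # Elastic bending check of a surveyed, landslide-deformed pipeline section.
        E = 206e9            # steel Young's modulus, Pa (assumed)
        D = 1.02             # outside diameter, m (assumed)
        x = np.array([0., 10., 20., 30., 40., 50., 60.])             # m along the section
        y = np.array([0., 0.04, 0.15, 0.32, 0.47, 0.55, 0.58])       # m, surveyed offsets

        coeffs = np.polyfit(x, y, deg=4)                # polynomial for the deformed axis
        dy = np.polyder(coeffs)
        d2y = np.polyder(coeffs, 2)
        xs = np.linspace(x[0], x[-1], 200)
        kappa = np.polyval(d2y, xs) / (1 + np.polyval(dy, xs) ** 2) ** 1.5   # curvature, 1/m

        sigma_bend = E * np.abs(kappa) * D / 2.0        # extreme-fiber bending stress, Pa
        print(f"max bending stress = {sigma_bend.max() / 1e6:.1f} MPa")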

  5. SEASONAL AND REGIONAL VARIATIONS OF PRIMARY AND SECONDARY ORGANIC AEROSOLS OVER THE CONTINENTAL UNITED STATES: SEMI-EMPIRICAL ESTIMATES AND MODEL EVALUATION

    EPA Science Inventory

    Seasonal and regional variations of primary (OCpri) and secondary (OCsec) organic carbon aerosols across the continental U.S. for the year 2001 were examined by a semi-empirical technique using observed OC and elemental carbon (EC) data from 142 routine moni...

  6. Variable Selection Strategies for Small-area Estimation Using FIA Plots and Remotely Sensed Data

    Treesearch

    Andrew Lister; Rachel Riemann; James Westfall; Mike Hoppus

    2005-01-01

    The USDA Forest Service's Forest Inventory and Analysis (FIA) unit maintains a network of tens of thousands of georeferenced forest inventory plots distributed across the United States. Data collected on these plots include direct measurements of tree diameter and height and other variables. We present a technique by which FIA plot data and coregistered...

  7. Land use change monitoring in Maryland using a probabilistic sample and rapid photointerpretation

    Treesearch

    Tonya Lister; Andrew Lister; Eunice Alexander

    2014-01-01

    The U.S. state of Maryland needs to monitor land use change in order to address land management objectives. This paper presents a change detection method that, through automation and standard geographic information system (GIS) techniques, facilitates the estimation of landscape change via photointerpretation. Using the protocols developed, we show a net loss of forest...

  8. Autonomous Object Characterization with Large Datasets

    DTIC Science & Technology

    2015-10-18

    desk, where a substantial amount of effort is required to transform raw photometry into a data product, minimizing the amount of time the analyst has...were used to explore concepts in satellite characterization and satellite state change. The first algorithm provides real-time stability estimation... Timely and effective space object (SO) characterization is a challenge, and requires advanced data processing techniques. Detection and identification

  9. Statistical Orbit Determination using the Particle Filter for Incorporating Non-Gaussian Uncertainties

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell

    2012-01-01

    The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in heavy-tailed distributions and more accurately represent low-probability orbit states. The classical methods of orbit determination (i.e. Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied to the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
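
    A generic bootstrap particle filter (propagate the particles through the dynamics, weight them by the measurement likelihood, resample) is easy to sketch. The scalar toy dynamics and noise models below merely stand in for the orbital problem and are not the paper's setup:

        import numpy as np

        def bootstrap_pf(measurements, propagate, likelihood, x0_particles):
            """Generic bootstrap particle filter.  Returns the mean state at each
            step; the particle cloud itself carries the full (possibly
            non-Gaussian) PDF of the state."""
            rng = np.random.default_rng(5)
            particles = np.array(x0_particles, dtype=float)
            n = len(particles)
            means = []
            for z in measurements:
                particles = propagate(particles, rng)          # Monte Carlo prediction
                w = likelihood(z, particles)
                w /= w.sum()
                idx = rng.choice(n, size=n, p=w)               # multinomial resampling
                particles = particles[idx]
                means.append(particles.mean())
            return np.array(means)

        # Toy 1-D example: nonlinear logistic growth with noisy magnitude measurements
        rng = np.random.default_rng(6)
        truth, x = [], 0.1
        for _ in range(50):
            x = x + 0.05 * x * (1 - x) + rng.normal(0, 0.01)
            truth.append(x)
        zs = np.abs(truth) + rng.normal(0, 0.05, 50)

        est = bootstrap_pf(
            zs,
            propagate=lambda p, r: p + 0.05 * p * (1 - p) + r.normal(0, 0.02, p.size),
            likelihood=lambda z, p: np.exp(-0.5 * ((z - np.abs(p)) / 0.05) ** 2),
            x0_particles=np.random.default_rng(7).uniform(0, 1, 2000),
        )
        print(est[-5:])   # tracks the last few true states closely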

  10. Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review.

    PubMed

    Dallora, Ana Luiza; Eivazzadeh, Shahryar; Mendes, Emilia; Berglund, Johan; Anderberg, Peter

    2017-01-01

    Dementia is a complex disorder characterized by poor outcomes for the patients and high costs of care. After decades of research, little is known about its mechanisms. Having prognostic estimates about dementia can help researchers, patients and public entities in dealing with this disorder. Thus, health data, machine learning and microsimulation techniques could be employed in developing prognostic estimates for dementia. The goal of this paper is to present evidence on the state of the art of studies investigating the prognosis of dementia using machine learning and microsimulation techniques. To achieve our goal, we carried out a systematic literature review, in which three large databases (PubMed, Scopus and Web of Science) were searched to select studies that employed machine learning or microsimulation techniques for the prognosis of dementia. A single backward snowballing was done to identify further studies. A quality checklist was also employed to assess the quality of the evidence presented by the selected studies, and low-quality studies were removed. Finally, data from the final set of studies were extracted in summary tables. In total 37 papers were included. The data summary results showed that the current research is focused on the investigation of patients with mild cognitive impairment who will evolve to Alzheimer's disease, using machine learning techniques. Microsimulation studies were concerned with cost estimation and had a populational focus. Neuroimaging was the most commonly used variable. Prediction of conversion from MCI to AD is the dominant theme in the selected studies. Most studies used ML techniques on Neuroimaging data. Only a few data sources have been recruited by most studies and the ADNI database is the one most commonly used. Only two studies have investigated the prediction of epidemiological aspects of Dementia using either ML or MS techniques. Finally, care should be taken when interpreting the reported accuracy of ML techniques, given studies' different contexts.

  11. Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review

    PubMed Central

    Mendes, Emilia; Berglund, Johan; Anderberg, Peter

    2017-01-01

    Background Dementia is a complex disorder characterized by poor outcomes for the patients and high costs of care. After decades of research, little is known about its mechanisms. Having prognostic estimates about dementia can help researchers, patients and public entities in dealing with this disorder. Thus, health data, machine learning and microsimulation techniques could be employed in developing prognostic estimates for dementia. Objective The goal of this paper is to present evidence on the state of the art of studies investigating the prognosis of dementia using machine learning and microsimulation techniques. Method To achieve our goal we carried out a systematic literature review, in which three large databases (PubMed, Scopus, and Web of Science) were searched to select studies that employed machine learning or microsimulation techniques for the prognosis of dementia. A single backward snowballing pass was done to identify further studies. A quality checklist was also employed to assess the quality of the evidence presented by the selected studies, and low quality studies were removed. Finally, data from the final set of studies were extracted in summary tables. Results In total 37 papers were included. The data summary results showed that the current research is focused on the investigation of patients with mild cognitive impairment that will evolve to Alzheimer’s disease, using machine learning techniques. Microsimulation studies were concerned with cost estimation and had a populational focus. Neuroimaging was the most commonly used variable. Conclusions Prediction of conversion from MCI to AD is the dominant theme in the selected studies. Most studies used ML techniques on neuroimaging data. Only a few data sources have been recruited by most studies and the ADNI database is the one most commonly used. Only two studies have investigated the prediction of epidemiological aspects of dementia using either ML or MS techniques. Finally, care should be taken when interpreting the reported accuracy of ML techniques, given studies’ different contexts. PMID:28662070

  12. Least-squares analysis of the Mueller matrix.

    PubMed

    Reimer, Michael; Yevick, David

    2006-08-15

    In a single-mode fiber excited by light with a fixed polarization state, the output polarizations obtained at two different optical frequencies are related by a Mueller matrix. We examine least-squares procedures for estimating this matrix from repeated measurements of the output Stokes vector for a random set of input polarization states. We then apply these methods to the determination of polarization mode dispersion and polarization-dependent loss in an optical fiber. We find that a relatively simple formalism leads to results that are comparable with those of far more involved techniques.
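
    The least-squares idea can be sketched with synthetic data: stack many (input, output) Stokes-vector pairs and solve for the 4x4 matrix that best maps one onto the other. The matrix, noise level, and number of polarization states below are illustrative assumptions, not values from the fiber experiments.

        import numpy as np

        rng = np.random.default_rng(1)

        # Assumed "true" Mueller matrix (a simple polarization rotator) for illustration only
        theta = 0.3
        M_true = np.array([[1, 0,              0,             0],
                           [0, np.cos(theta), -np.sin(theta), 0],
                           [0, np.sin(theta),  np.cos(theta), 0],
                           [0, 0,              0,             1.0]])

        # Repeated measurements: random input Stokes vectors, noisy output Stokes vectors
        n = 200
        S_in = rng.normal(size=(4, n))
        S_out = M_true @ S_in + 0.01 * rng.normal(size=(4, n))

        # Least-squares estimate: solve S_in^T M^T ~= S_out^T and transpose back
        M_est, *_ = np.linalg.lstsq(S_in.T, S_out.T, rcond=None)
        M_est = M_est.T

        print(np.round(M_est, 3))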

  13. Gravity Field Characterization around Small Bodies

    NASA Astrophysics Data System (ADS)

    Takahashi, Yu

    A small body rendezvous mission requires accurate gravity field characterization for safe, accurate navigation purposes. However, the current techniques of gravity field modeling around small bodies have not yet reached a satisfactory level. This thesis will address how the process of current gravity field characterization can be made more robust for future small body missions. First, we perform a covariance analysis around small bodies via multiple slow flybys. Flyby characterization requires less laborious scheduling than its orbit counterpart, simultaneously reducing the risk of impact into the asteroid's surface. It will be shown that the level of initial characterization that can occur with this approach is no less than that of the orbit approach. Next, we apply the same technique of gravity field characterization to estimate the spin state of 4179 Toutatis, which is a near-Earth asteroid in a near-4:1 resonance with the Earth. The data accumulated from 1992 to 2008 are processed in a least-squares filter to predict Toutatis' orientation during the 2012 apparition. The center-of-mass offset and the moments of inertia estimated in this way can be used to constrain the internal density distribution within the body. Then, the spin state estimation is generalized into a method for estimating the internal density distribution within a small body. The density distribution is estimated from the orbit determination solution of the gravitational coefficients. It will be shown that the surface gravity field reconstructed from the estimated density distribution yields higher accuracy than the conventional gravity field models. Finally, we will investigate two types of relatively unknown gravity fields, namely the interior gravity field and the interior spherical Bessel gravity field, in order to determine how accurately the surface gravity field can be mapped out for proximity operations purposes. It will be shown that these formulations compute the surface gravity field with unprecedented accuracy for a well-chosen set of parametric settings, both regionally and globally.

  14. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
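
    A minimal sketch of the underlying propagation-of-uncertainty step: for a ratio estimate such as ϕ = ΔΔG(transition state)/ΔΔG(folding), the first-order (delta-method) variance includes a covariance term that the independence assumption drops. The free-energy values, standard errors, and covariance below are invented for illustration, not taken from the paper.

        import numpy as np

        # Illustrative numbers (kcal/mol); not taken from the study above
        ddG_ts, sd_ts = 0.80, 0.10      # change in activation free energy and its std error
        ddG_eq, sd_eq = 1.60, 0.12      # change in equilibrium stability and its std error
        cov = 0.008                     # covariance between the two estimates (often nonzero)

        phi = ddG_ts / ddG_eq

        # First-order (delta-method) propagation for a ratio, including the covariance term
        var_phi = phi**2 * ((sd_ts / ddG_ts)**2 + (sd_eq / ddG_eq)**2
                            - 2.0 * cov / (ddG_ts * ddG_eq))
        ci95 = 1.96 * np.sqrt(var_phi)

        print(f"phi = {phi:.2f} +/- {ci95:.2f} (95% CI)")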

  15. Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States

    PubMed Central

    Gebru, Timnit; Krause, Jonathan; Wang, Yilun; Chen, Duyun; Deng, Jia; Aiden, Erez Lieberman; Fei-Fei, Li

    2017-01-01

    The United States spends more than $250 million each year on the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed several years. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may become an increasingly practical supplement to the ACS. Here, we present a method that estimates socioeconomic characteristics of regions spanning 200 US cities by using 50 million images of street scenes gathered with Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22 million automobiles in total (8% of all automobiles in the United States), were used to accurately estimate income, race, education, and voting patterns at the zip code and precinct level. (The average US precinct contains ∼1,000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographics may effectively complement labor-intensive approaches, with the potential to measure demographics with fine spatial resolution, in close to real time. PMID:29183967

  16. Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States.

    PubMed

    Gebru, Timnit; Krause, Jonathan; Wang, Yilun; Chen, Duyun; Deng, Jia; Aiden, Erez Lieberman; Fei-Fei, Li

    2017-12-12

    The United States spends more than $250 million each year on the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed several years. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may become an increasingly practical supplement to the ACS. Here, we present a method that estimates socioeconomic characteristics of regions spanning 200 US cities by using 50 million images of street scenes gathered with Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22 million automobiles in total (8% of all automobiles in the United States), were used to accurately estimate income, race, education, and voting patterns at the zip code and precinct level. (The average US precinct contains ∼1,000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographics may effectively complement labor-intensive approaches, with the potential to measure demographics with fine spatial resolution, in close to real time. Copyright © 2017 the Author(s). Published by PNAS.

  17. Correlation between resting state fMRI total neuronal activity and PET metabolism in healthy controls and patients with disorders of consciousness.

    PubMed

    Soddu, Andrea; Gómez, Francisco; Heine, Lizette; Di Perri, Carol; Bahri, Mohamed Ali; Voss, Henning U; Bruno, Marie-Aurélie; Vanhaudenhuyse, Audrey; Phillips, Christophe; Demertzi, Athena; Chatelle, Camille; Schrouff, Jessica; Thibaut, Aurore; Charland-Verville, Vanessa; Noirhomme, Quentin; Salmon, Eric; Tshibanda, Jean-Flory Luaba; Schiff, Nicholas D; Laureys, Steven

    2016-01-01

    The mildly invasive 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) is a well-established imaging technique to measure 'resting state' cerebral metabolism. This technique made it possible to assess changes in metabolic activity in clinical applications, such as the study of severe brain injury and disorders of consciousness. We assessed the possibility of creating functional MRI activity maps that could estimate the relative levels of activity in FDG-PET cerebral metabolic maps. Even if absolute metabolic measures cannot be extracted, our approach may still be of clinical use in centers without access to FDG-PET. It also overcomes the problem of recognizing individual networks during independent component selection in functional magnetic resonance imaging (fMRI) resting state analysis. We extracted resting state fMRI functional connectivity maps using independent component analysis and combined only components of neuronal origin. To assess whether components were of neuronal origin, a classification based on a support vector machine (SVM) was used. We compared the generated maps with the FDG-PET maps in 16 healthy controls, 11 vegetative state/unresponsive wakefulness syndrome patients, and four locked-in patients. The results show a significant similarity between the FDG-PET and the fMRI-based maps, with ρ = 0.75 ± 0.05 for healthy controls and ρ = 0.58 ± 0.09 for vegetative state/unresponsive wakefulness syndrome patients. FDG-PET, the fMRI neuronal maps, and the conjunction analysis show decreases in frontoparietal and medial regions in vegetative patients with respect to controls. Subsequent analysis in locked-in syndrome patients also produced maps consistent with those of healthy controls. The constructed resting state fMRI functional connectivity map points toward the possibility of using resting state fMRI to estimate relative levels of activity in a metabolic map.

  18. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
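
    A minimal sketch of standard (non-delayed) nudging, the baseline that the paper generalizes: the model equations are driven toward a sparsely observed variable through an extra relaxation term. Here only the x component of the Lorenz-63 system is observed; the gain, noise level, and time step are assumptions, and the time-delay extension is not reproduced.

        import numpy as np

        def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = v
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        rng = np.random.default_rng(2)
        dt, steps = 0.005, 10000
        k = 20.0                                   # nudging (relaxation) gain, assumed
        truth = np.array([1.0, 1.0, 1.0])
        model = np.array([-5.0, 0.0, 20.0])        # wrong initial condition to be corrected

        for i in range(steps):
            truth = truth + dt * lorenz(truth)                 # reference ("nature") run
            obs_x = truth[0] + rng.normal(0.0, 0.1)            # noisy observation of x only
            nudge = np.array([k * (obs_x - model[0]), 0.0, 0.0])
            model = model + dt * (lorenz(model) + nudge)       # nudged model run

        # With a sufficiently strong gain the nudged run typically locks onto the truth
        print("final state error:", np.abs(truth - model))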

  19. Boundary methods for mode estimation

    NASA Astrophysics Data System (ADS)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture of Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
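
    A minimal sketch of the MOG/AIC baseline referred to above (not the boundary method itself): Gaussian mixtures of increasing order are fitted and the AIC-minimizing order is taken as the mode estimate. The synthetic three-mode data set is an assumption for illustration.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        # Illustrative data drawn from three modes
        data = np.concatenate([rng.normal(-4, 0.8, 300),
                               rng.normal(0, 0.6, 300),
                               rng.normal(5, 1.0, 300)]).reshape(-1, 1)

        # Fit mixtures of increasing order and keep the AIC-minimizing model
        aics = []
        for k in range(1, 8):
            gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(data)
            aics.append(gm.aic(data))

        best_k = int(np.argmin(aics)) + 1
        print("estimated number of modes:", best_k)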

  20. Estimation of Land Surface Fluxes and Their Uncertainty via Variational Data Assimilation Approach

    NASA Astrophysics Data System (ADS)

    Abdolghafoorian, A.; Farhadi, L.

    2016-12-01

    Accurate estimation of land surface heat and moisture fluxes as well as root zone soil moisture is crucial in various hydrological, meteorological, and agricultural applications. "In situ" measurements of these fluxes are costly and cannot be readily scaled to large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state variables. In this work, we applied a novel approach based on the variational data assimilation (VDA) methodology to estimate land surface fluxes and soil moisture profile from the land surface states. This study accounts for the strong linkage between terrestrial water and energy cycles by coupling the dual source energy balance equation with the water balance equation through the mass flux of evapotranspiration (ET). Heat diffusion and moisture diffusion into the column of soil are adjoined to the cost function as constraints. This coupling results in more accurate prediction of land surface heat and moisture fluxes and consequently soil moisture at multiple depths with high temporal frequency as required in many hydrological, environmental and agricultural applications. One of the key limitations of VDA technique is its tendency to be ill-posed, meaning that a continuum of possibilities exists for different parameters that produce essentially identical measurement-model misfit errors. On the other hand, the value of heat and moisture flux estimation to decision-making processes is limited if reasonable estimates of the corresponding uncertainty are not provided. In order to address these issues, in this research uncertainty analysis will be performed to estimate the uncertainty of retrieved fluxes and root zone soil moisture. The assimilation algorithm is tested with a series of experiments using a synthetic data set generated by the simultaneous heat and water (SHAW) model. We demonstrate the VDA performance by comparing the (synthetic) true measurements (including profile of soil moisture and temperature, land surface water and heat fluxes, and root water uptake) with VDA estimates. In addition, the feasibility of extending the proposed approach to use remote sensing observations is tested by limiting the number of LST observations and soil moisture observations.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.

    Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian model and informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1,000 ft² for over 50 building types at the national and sub-national level with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics system tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, carrying out Bayesian analytics as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.

  2. Estimating soil hydraulic parameters from transient flow experiments in a centrifuge using parameter optimization technique

    USGS Publications Warehouse

    Šimůnek, Jirka; Nimmo, John R.

    2005-01-01

    A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for parameter estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time‐variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using equilibrium analysis and steady state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating time but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field.
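
    The inverse-modeling idea can be sketched in a much simpler setting than the transient centrifuge experiments: fit soil hydraulic parameters by minimizing the misfit between a retention-curve model and (synthetic) water-content observations. The van Genuchten form, the fixed residual and saturated contents, the noise level, and the starting values are all assumptions for illustration, not the Hydrus inverse solver.

        import numpy as np
        from scipy.optimize import least_squares

        def theta_vg(h, alpha, n, theta_r=0.05, theta_s=0.40):
            # van Genuchten retention curve; residual/saturated contents held fixed for simplicity
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

        # Synthetic "measured" water contents at a set of matric heads (cm), with noise
        rng = np.random.default_rng(4)
        h_obs = np.array([-10., -30., -100., -300., -1000., -3000.])
        theta_obs = theta_vg(h_obs, alpha=0.02, n=1.8) + rng.normal(0, 0.005, h_obs.size)

        # Inverse problem: recover (alpha, n) by minimizing the model-data misfit
        res = least_squares(lambda p: theta_vg(h_obs, p[0], p[1]) - theta_obs,
                            x0=[0.05, 1.3], bounds=([1e-4, 1.05], [1.0, 4.0]))
        print("estimated alpha, n:", res.x)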

  3. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  4. pathChirp: Efficient Available Bandwidth Estimation for Network Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cottrell, Les

    2003-04-30

    This paper presents pathChirp, a new active probing tool for estimating the available bandwidth on a communication network path. Based on the concept of "self-induced congestion," pathChirp features an exponential flight pattern of probes we call a chirp. Packet chirps offer several significant advantages over current probing schemes based on packet pairs or packet trains. By rapidly increasing the probing rate within each chirp, pathChirp obtains a rich set of information from which to dynamically estimate the available bandwidth. Since it uses only packet interarrival times for estimation, pathChirp does not require synchronized or highly stable clocks at the sender and receiver. We test pathChirp with simulations and Internet experiments and find that it provides good estimates of the available bandwidth while using only a fraction of the number of probe bytes that current state-of-the-art techniques use.

  5. Earth science research

    NASA Technical Reports Server (NTRS)

    Botkin, Daniel B.

    1987-01-01

    The analysis of ground-truth data from the boreal forest plots in the Superior National Forest, Minnesota, was completed. Development of statistical methods was completed for dimension analysis (equations to estimate the biomass of trees from measurements of diameter and height). The dimension-analysis equations were applied to the data obtained from ground-truth plots, to estimate the biomass. Classification and analyses of remote sensing images of the Superior National Forest were done as a test of the technique to determine forest biomass and ecological state by remote sensing. Data was archived on diskette and tape and transferred to UCSB to be used in subsequent research.

  6. Elevation of a cane-growing area of the state of Sao Paulo using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Mendonca, F. J.; Lee, D. C. L.; Tardin, A. T.; Shimabukuro, Y. E.; Chen, S. C.; Lucht, L. A. M.; Moreira, M. A.; Delima, A. M.; Maia, F. C. S.

    1981-01-01

    Images at a scale of 1:250,000 were visually interpreted for identification and area estimates of sugar cane plantations in Sao Paulo. The basic criteria for crop identification were the spectral characteristics of channels 5 and 7 and their temporal variations observed from different LANDSAT passes. Using this technique, it was possible to map the sugar cane areas as well as the sugar cane already harvested. An area of 801,950 hectares was estimated within the study area. The confidence interval of correct classification ranged from 87.11% to 94.71%.

  7. Local neighborhood transition probability estimation and its use in contextual classification

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of incorporating spatial or contextual information into classifications is considered. A simple model that describes the spatial dependencies between neighboring pixels with a single parameter, Theta, is presented. Expressions are derived for updating the a posteriori probabilities of the states of nature of the pattern under consideration using information from the neighboring patterns, both for spatially uniform context and for Markov dependencies, in terms of Theta. Techniques are developed for obtaining the optimal value of the parameter Theta as a maximum likelihood estimate from the local neighborhood of the pattern under consideration.
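
    A hedged sketch of the general idea of a single-parameter contextual update (not the exact expressions derived in the paper): each pixel's class posteriors are reweighted by its neighbors' posteriors under an assumed model in which a neighbor shares the pixel's class with probability Theta and otherwise takes one of the remaining classes uniformly.

        import numpy as np

        def contextual_update(p_center, p_neighbors, theta, eps=1e-12):
            """Update a pixel's class posteriors using its neighbors' posteriors.

            Assumed neighborhood model (illustrative only): with probability theta a
            neighbor shares the pixel's class; otherwise its class is uniform over
            the remaining classes.
            """
            K = p_center.size
            post = p_center.copy()
            for p_nb in p_neighbors:
                post *= theta * p_nb + (1.0 - theta) * (1.0 - p_nb) / (K - 1)
            return post / (post.sum() + eps)

        # Example: 3 classes; the pixel weakly favors class 1, all 4 neighbors favor class 0
        p_center = np.array([0.4, 0.45, 0.15])
        p_neighbors = [np.array([0.7, 0.2, 0.1])] * 4
        print(contextual_update(p_center, p_neighbors, theta=0.8))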

  8. Dynamic Strain Measurements on Automotive and Aeronautic Composite Components by Means of Embedded Fiber Bragg Grating Sensors

    PubMed Central

    Lamberti, Alfredo; Chiesura, Gabriele; Luyckx, Geert; Degrieck, Joris; Kaufmann, Markus; Vanlanduit, Steve

    2015-01-01

    The measurement of the internal deformations occurring in real-life composite components is a very challenging task, especially for those components that are rather difficult to access. Optical fiber sensors can overcome such a problem, since they can be embedded in the composite materials and serve as in situ sensors. In this article, embedded optical fiber Bragg grating (FBG) sensors are used to analyze the vibration characteristics of two real-life composite components. The first component is a carbon fiber-reinforced polymer automotive control arm; the second is a glass fiber-reinforced polymer aeronautic hinge arm. The modal parameters of both components were estimated by processing the FBG signals with two interrogation techniques: the maximum detection and fast phase correlation algorithms were employed for the demodulation of the FBG signals; the Peak-Picking and PolyMax techniques were instead used for the parameter estimation. To validate the FBG outcomes, reference measurements were performed by means of a laser Doppler vibrometer. The analysis of the results showed that the FBG sensing capabilities were enhanced when the recently-introduced fast phase correlation algorithm was combined with the state-of-the-art PolyMax estimator curve fitting method. In this case, the FBGs provided the most accurate results, i.e., it was possible to fully characterize the vibration behavior of both composite components. When using more traditional interrogation algorithms (maximum detection) and modal parameter estimation techniques (Peak-Picking), some of the modes were not successfully identified. PMID:26516854

  9. Multi-level methods and approximating distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E.

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
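
    A minimal sketch of the multi-level idea for a single statistic: many cheap, low-accuracy sample paths give a crude estimate, and coupled fine/coarse corrections are added level by level. The example estimates E[X_T] for a geometric Brownian motion with Euler discretization; the model, level count, and sample sizes are assumptions and stand in for the biochemical networks discussed above.

        import numpy as np

        rng = np.random.default_rng(5)

        def coupled_euler(level, n_paths, T=1.0, x0=1.0, mu=0.05, sigma=0.2, M=2):
            """Return fine-path payoffs and (fine - coarse) corrections at a given level."""
            nf = M ** level                      # number of fine time steps
            dtf = T / nf
            dW = rng.normal(0.0, np.sqrt(dtf), size=(n_paths, nf))
            xf = np.full(n_paths, x0)
            xc = np.full(n_paths, x0)
            for i in range(nf):
                xf += mu * xf * dtf + sigma * xf * dW[:, i]
            if level > 0:
                dWc = dW.reshape(n_paths, nf // M, M).sum(axis=2)   # aggregated increments
                dtc = T / (nf // M)
                for i in range(nf // M):
                    xc += mu * xc * dtc + sigma * xc * dWc[:, i]
                return xf, xf - xc
            return xf, xf                        # level 0: no coarser level to correct

        # Crude estimate from many cheap level-0 paths, plus finer correction terms
        L, samples = 4, [20000, 8000, 3000, 1200, 500]
        estimate = 0.0
        for level in range(L + 1):
            _, correction = coupled_euler(level, samples[level])
            estimate += correction.mean()
        print("MLMC estimate of E[X_T]:", estimate)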

  10. Decomposition Technique for Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

    The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.
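
    A hedged sketch of the two-map decomposition described above, using ordinary linear regression as a stand-in for whatever regressors the tool actually employs: one map from a monitored feature to damage, one from operating load to damage rate, followed by an on-line extrapolation to the failure threshold. All data, names, and thresholds are synthetic assumptions.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(11)

        # Off-line phase: learn the two maps from (synthetic) run-to-failure data
        damage_hist = np.linspace(0, 0.8, 200)
        feature_hist = 2.0 * damage_hist + rng.normal(0, 0.05, 200)      # feature vs damage
        load_hist = rng.uniform(0.5, 1.5, 200)
        rate_hist = 0.004 * load_hist + rng.normal(0, 2e-4, 200)         # damage rate vs load

        feat_to_damage = LinearRegression().fit(feature_hist.reshape(-1, 1), damage_hist)
        load_to_rate = LinearRegression().fit(load_hist.reshape(-1, 1), rate_hist)

        # On-line phase: estimate current damage, then extrapolate to the threshold
        current_feature, expected_load, threshold = 0.9, 1.2, 0.8
        d_now = feat_to_damage.predict([[current_feature]])[0]
        rate = load_to_rate.predict([[expected_load]])[0]
        rul_cycles = (threshold - d_now) / rate
        print(f"estimated damage {d_now:.2f}, RUL ~ {rul_cycles:.0f} cycles")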

  11. Quantifying uncertainty in soot volume fraction estimates using Bayesian inference of auto-correlated laser-induced incandescence measurements

    NASA Astrophysics Data System (ADS)

    Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.

    2016-01-01

    Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.

  12. Low-head hydropower assessment of the Brazilian State of São Paulo

    USGS Publications Warehouse

    Artan, Guleid A.; Cushing, W. Matthew; Mathis, Melissa L.; Tieszen, Larry L.

    2014-01-01

    This study produced a comprehensive estimate of the magnitude of hydropower potential available in the streams that drain watersheds entirely within the State of São Paulo, Brazil. Because a large part of the contributing area is outside of São Paulo, the main stem of the Paraná River was excluded from the assessment. Potential head drops were calculated from the Digital Terrain Elevation Data, which has a 1-arc-second resolution (approximately 30-meter resolution at the equator). For the conditioning and validation of synthetic stream channels derived from the Digital Elevation Model datasets, hydrography data (in digital format) supplied by the São Paulo State Department of Energy and the Agência Nacional de Águas were used. Within the study area there were 1,424 rain gages and 123 streamgages with long-term data records. To estimate average yearly streamflow, a hydrologic regionalization system that divides the State into 21 homogeneous basins was used. Stream segments, upstream areas, and mean annual rainfall were estimated using geographic information systems techniques. The accuracy of the flows estimated with the regionalization models was validated. Overall, simulated streamflows were significantly correlated with the observed flows but with a consistent underestimation bias. When the annual mean flows from the regionalization models were adjusted upward by 10 percent, average streamflow estimation bias was reduced from -13 percent to -4 percent. The sum of all the validated stream reach mean annual hydropower potentials in the 21 basins is 7,000 megawatts (MW). Hydropower potential is mainly concentrated near the Serra do Mar mountain range and along the Tietê River. The power potential along the Tietê River is mainly at sites with medium and high potentials, sites where hydropower has already been harnessed. In addition to the annual mean hydropower estimates, potential hydropower estimates with flow rates with exceedance probabilities of 40 percent, 60 percent, and 90 percent were made.
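
    The reach-level potential behind such assessments follows the standard hydropower relation P = ρ g Q H (times an efficiency factor); the discharge, head, and efficiency below are illustrative values, not numbers from the São Paulo study.

        # Theoretical hydropower potential of a stream reach: P = eta * rho * g * Q * H
        # (values below are illustrative, not taken from the Sao Paulo assessment)
        rho = 1000.0       # water density, kg/m^3
        g = 9.81           # gravitational acceleration, m/s^2
        Q = 12.0           # mean annual discharge, m^3/s
        H = 25.0           # usable head drop over the reach, m
        efficiency = 0.85  # overall turbine/generator efficiency (assumed)

        P_watts = efficiency * rho * g * Q * H
        print(f"potential: {P_watts / 1e6:.2f} MW")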

  13. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining an ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance, which can result in filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output, and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm for evaluating and developing ecosystem models and for improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
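
    A rough sketch of the joint state-parameter idea on a toy scalar system, not the SEnKF implementation of the paper: the parameter is appended to the state vector, a kernel-smoothing step shrinks the parameter ensemble toward its mean and re-disperses it before each forecast, and a perturbed-observation ensemble Kalman update corrects both components. The model, gains, noise levels, and ensemble size are assumptions.

        import numpy as np

        rng = np.random.default_rng(6)

        # Toy system: x_{t+1} = b * x_t + w_t, observed as y_t = x_t + v_t, with b unknown
        b_true, q, r, T, Ne = 0.9, 0.2, 0.1, 200, 100
        a = 0.98                                   # kernel-smoothing shrinkage factor (assumed)

        x_true, y_obs = 1.0, []
        for _ in range(T):
            x_true = b_true * x_true + rng.normal(0, q)
            y_obs.append(x_true + rng.normal(0, r))

        # Augmented ensemble: column 0 = state x, column 1 = parameter b
        ens = np.column_stack([rng.normal(1.0, 0.5, Ne), rng.uniform(0.3, 1.5, Ne)])

        for y in y_obs:
            # Kernel smoothing of the parameter ensemble: shrink toward the mean, re-disperse
            b_mean, b_var = ens[:, 1].mean(), ens[:, 1].var()
            ens[:, 1] = (a * ens[:, 1] + (1 - a) * b_mean
                         + rng.normal(0, np.sqrt(max((1 - a**2) * b_var, 1e-12)), Ne))

            # Forecast: propagate each member's state with its own parameter value
            ens[:, 0] = ens[:, 1] * ens[:, 0] + rng.normal(0, q, Ne)

            # Perturbed-observation EnKF analysis updates state and parameter jointly
            Hx = ens[:, 0]
            P_xy = np.cov(ens.T, Hx)[0:2, 2]       # cov of (state, parameter) with predicted obs
            P_yy = Hx.var(ddof=1) + r**2
            K = P_xy / P_yy
            innovations = y + rng.normal(0, r, Ne) - Hx
            ens += np.outer(innovations, K)

        print(f"estimated parameter b: {ens[:, 1].mean():.3f} (true value {b_true})")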

  14. Egg banking in the United States: current status of commercially available cryopreserved oocytes.

    PubMed

    Quaas, Alexander M; Melamed, Alexander; Chung, Karine; Bendikson, Kristin A; Paulson, Richard J

    2013-03-01

    To estimate the current availability of donor cryopreserved oocytes and to describe the emerging phenomenon of commercial egg banks (CEBs) in the United States. Cross-sectional survey of CEBs. E-mail, telephone, and fax survey of all CEB scientific directors, conducted April 2012. None. None. Number and location of CEBs in the United States, years in existence, number of donors, number of available oocytes, level of donor anonymity, donor screening, cost of oocytes to recipients, freezing/thawing technique, pregnancy statistics. Seven CEBs were identified and surveyed (response rate: 100%). The CEBs used three distinct operational models, had been in existence for a median of 2 years (range: 1 to 8 years), with a median 21.5 (range: 6 to 100) donors and 120 (range: 20 to 1,000) currently available oocytes. The median recommended minimum number of eggs to obtain was six (range: four to seven), at an estimated mean cost per oocyte of $2,225 (range: $1,500 to $2,500). An estimated 3,130 oocytes from 294 donors are currently stored for future use. Of these CEBs, 6 (86%) of 7 use vitrification as cryopreservation method. To date, 8,780 frozen donor oocytes from CEBs have been used for in vitro fertilization, resulting in 602 pregnancies. Pregnancy rates per oocyte, available for 5 (71%) of 7 CEBs, were 532 (7.5%) of 7,080 for CEBs using vitrification and 70 (10%) of 700 for the single CEB using slow freezing as cryopreservation method. Frozen donor eggs are currently widely available in the United States. Three different operational models are currently used, resulting in more than 600 pregnancies from oocytes obtained at CEBs. The majority of CEBs use vitrification as cryopreservation technique. Copyright © 2013 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  15. Smoothing-based compressed state Kalman filter for joint state-parameter estimation: Applications in reservoir characterization and CO2 storage monitoring

    NASA Astrophysics Data System (ADS)

    Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.

    2017-08-01

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one step ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one step ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.

  16. Spatial characterization of the edge barrier in wide superconducting films

    NASA Astrophysics Data System (ADS)

    Sivakov, A. G.; Turutanov, O. G.; Kolinko, A. E.; Pokhila, A. S.

    2018-03-01

    The current-induced destruction of superconductivity in weak magnetic fields is discussed for wide superconducting thin films, whose width is greater than the magnetic field penetration depth. Particular attention is paid to the role of the boundary potential barrier (the Bean-Livingston barrier) in critical state formation and to detection of the edge responsible for this critical state under different mutual orientations of the external perpendicular magnetic field and the transport current. Critical and resistive states of the film were visualized using the space-resolving low-temperature laser scanning microscopy (LTLSM) method, which enables detection of critical-current-determining areas on the film edges. Based on these observations, a simple technique was developed for investigation of the critical state separately at each film edge, and for the estimation of residual magnetic fields in cryostats. The proposed method only requires recording of the current-voltage characteristics of the film in a weak magnetic field, thus circumventing the need for complex LTLSM techniques. Information thus obtained is particularly important for interpretation of studies of superconducting film single-photon light emission detectors.

  17. HT2DINV: A 2D forward and inverse code for steady-state and transient hydraulic tomography problems

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.

    2015-12-01

    Hydraulic tomography is a technique used to characterize the spatial heterogeneities of storativity and transmissivity fields. The responses of an aquifer to a source of hydraulic stimulations are used to recover the features of the estimated fields using inverse techniques. We developed a 2D free source Matlab package for performing hydraulic tomography analysis in steady state and transient regimes. The package uses the finite elements method to solve the ground water flow equation for simple or complex geometries accounting for the anisotropy of the material properties. The inverse problem is based on implementing the geostatistical quasi-linear approach of Kitanidis combined with the adjoint-state method to compute the required sensitivity matrices. For undetermined inverse problems, the adjoint-state method provides a faster and more accurate approach for the evaluation of sensitivity matrices compared with the finite differences method. Our methodology is organized in a way that permits the end-user to activate parallel computing in order to reduce the computational burden. Three case studies are investigated demonstrating the robustness and efficiency of our approach for inverting hydraulic parameters.

  18. Fitting mechanistic epidemic models to data: A comparison of simple Markov chain Monte Carlo approaches.

    PubMed

    Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M

    2018-07-01

    Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
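
    A minimal sketch of the general fitting setup, using a hand-rolled random-walk Metropolis-Hastings sampler rather than JAGS, NIMBLE, or Stan, and a deterministic (rather than stochastic) latent epidemic with Poisson observation error; all parameter values and data are synthetic assumptions.

        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(7)

        def simulate_incidence(beta, gamma, N=10000, I0=10, T=60):
            """Deterministic discrete-time SIR; returns expected new cases per day."""
            S, I = N - I0, I0
            new_cases = []
            for _ in range(T):
                inc = beta * S * I / N
                S, I = S - inc, I + inc - gamma * I
                new_cases.append(max(inc, 1e-9))
            return np.array(new_cases)

        # Synthetic reported data: Poisson observation error around the true incidence
        true_inc = simulate_incidence(beta=0.35, gamma=0.15)
        data = rng.poisson(true_inc)

        def log_lik(params):
            beta, gamma = params
            if beta <= 0 or gamma <= 0:
                return -np.inf
            return poisson.logpmf(data, simulate_incidence(beta, gamma)).sum()

        # Random-walk Metropolis-Hastings over (beta, gamma), flat priors assumed
        theta = np.array([0.5, 0.25])
        ll = log_lik(theta)
        chain = []
        for _ in range(5000):
            prop = theta + rng.normal(0, 0.01, 2)
            ll_prop = log_lik(prop)
            if np.log(rng.random()) < ll_prop - ll:
                theta, ll = prop, ll_prop
            chain.append(theta.copy())

        print("posterior means (beta, gamma):", np.mean(chain[1000:], axis=0))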

  19. Analysis of filter tuning techniques for sequential orbit determination

    NASA Technical Reports Server (NTRS)

    Lee, T.; Yee, C.; Oza, D.

    1995-01-01

    This paper examines filter tuning techniques for a sequential orbit determination (OD) covariance analysis. Recently, there has been a renewed interest in sequential OD, primarily due to the successful flight qualification of the Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) using Doppler data extracted onboard the Extreme Ultraviolet Explorer (EUVE) spacecraft. TONS computes highly accurate orbit solutions onboard the spacecraft in realtime using a sequential filter. As the result of the successful TONS-EUVE flight qualification experiment, the Earth Observing System (EOS) AM-1 Project has selected TONS as the prime navigation system. In addition, sequential OD methods can be used successfully for ground OD. Whether data are processed onboard or on the ground, a sequential OD procedure is generally favored over a batch technique when a realtime automated OD system is desired. Recently, OD covariance analyses were performed for the TONS-EUVE and TONS-EOS missions using the sequential processing options of the Orbit Determination Error Analysis System (ODEAS). ODEAS is the primary covariance analysis system used by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). The results of these analyses revealed a high sensitivity of the OD solutions to the state process noise filter tuning parameters. The covariance analysis results show that the state estimate error contributions from measurement-related error sources, especially those due to the random noise and satellite-to-satellite ionospheric refraction correction errors, increase rapidly as the state process noise increases. These results prompted an in-depth investigation of the role of the filter tuning parameters in sequential OD covariance analysis. This paper analyzes how the spacecraft state estimate errors due to dynamic and measurement-related error sources are affected by the process noise level used. This information is then used to establish guidelines for determining optimal filter tuning parameters in a given sequential OD scenario for both covariance analysis and actual OD. Comparisons are also made with corresponding definitive OD results available from the TONS-EUVE analysis.

  20. Probability of introducing foot and mouth disease into the United States via live animal importation.

    PubMed

    Miller, G Y; Ming, J; Williams, I; Gorvett, R

    2012-12-01

    Foot and mouth disease (FMD) continues to be a disease of major concern for the United States Department of Agriculture (USDA) and livestock industries. Foot and mouth disease virus is a high-consequence pathogen for the United States (USA). Live animal trade is a major risk factor for introduction of FMD into a country. This research estimates the probability of FMD being introduced into the USA via the legal importation of livestock. This probability is calculated by considering the potential introduction of FMD from each country from which the USA imports live animals. The total probability of introduction into the USA of FMD from imported livestock is estimated to be 0.415% per year, which is equivalent to one introduction every 241 years. In addition, to provide a basis for evaluating the significance of risk management techniques and expenditures, the sensitivity of the above result to changes in various risk parameter assumptions is determined.
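
    The headline numbers illustrate a standard identity: if each source country contributes an independent annual introduction probability, the combined annual probability is one minus the product of the complements, and its reciprocal is the expected number of years between introductions. The per-country values below are assumptions, not the paper's estimates.

        import numpy as np

        # Illustrative annual probabilities of introduction from each source country
        # (assumed numbers, not those estimated in the paper)
        p_country = np.array([0.0020, 0.0012, 0.0006, 0.0003])

        # Probability of at least one introduction per year across all source countries
        p_total = 1.0 - np.prod(1.0 - p_country)

        # Expected waiting time between introductions, in years
        print(f"annual probability: {p_total:.4%}; about one introduction every {1.0 / p_total:.0f} years")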

  1. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-05-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
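
    A minimal sketch of Schur-product localization on a one-dimensional grid: a small ensemble produces spurious long-range sample covariances, which an entry-wise product with a distance-dependent correlation matrix suppresses. A simple Gaussian taper stands in for the Gaspari-Cohn function usually used in practice; the grid size, ensemble size, and length scales are assumptions.

        import numpy as np

        rng = np.random.default_rng(8)

        # True covariance on a 1-D grid: squared-exponential with a 3-grid-point length scale
        n, Ne = 40, 15
        grid = np.arange(n, dtype=float)
        dist = np.abs(grid[:, None] - grid[None, :])
        truth_cov = np.exp(-dist**2 / (2.0 * 3.0**2))

        # Small ensemble: sampling noise creates spurious long-range covariances
        ensemble = rng.multivariate_normal(np.zeros(n), truth_cov, size=Ne)
        sample_cov = np.cov(ensemble.T)

        # Distance-dependent localization matrix (a Gaussian taper here; the Gaspari-Cohn
        # piecewise polynomial is the usual operational choice)
        L = np.exp(-dist**2 / (2.0 * 5.0**2))

        # Schur (entry-wise) product suppresses the spurious far-field covariances
        localized_cov = L * sample_cov

        far = dist > 15
        print("max |cov| beyond 15 grid points: raw %.3f, localized %.3f"
              % (np.abs(sample_cov[far]).max(), np.abs(localized_cov[far]).max()))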

  2. A High-Speed, Real-Time Visualization and State Estimation Platform for Monitoring and Control of Electric Distribution Systems: Implementation and Field Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta

    Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.

  3. Representation and redistribution in federations.

    PubMed

    Dragu, Tiberiu; Rodden, Jonathan

    2011-05-24

    Many of the world's most populous democracies are political unions composed of states or provinces that are unequally represented in the national legislature. Scattered empirical studies, most of them focusing on the United States, have discovered that overrepresented states appear to receive larger shares of the national budget. Although this relationship is typically attributed to bargaining advantages associated with greater legislative representation, an important threat to empirical identification stems from the fact that the representation scheme was chosen by the provinces. Thus, it is possible that representation and fiscal transfers are both determined by other characteristics of the provinces in a specific country. To obtain an improved estimate of the relationship between representation and redistribution, we collect and analyze provincial-level data from nine federations over several decades, taking advantage of the historical process through which federations formed and expanded. Controlling for a variety of country- and province-level factors and using a variety of estimation techniques, we show that overrepresented provinces in political unions around the world are rather dramatically favored in the distribution of resources.

  4. Development of a preference-based index from the National Eye Institute Visual Function Questionnaire-25.

    PubMed

    Rentz, Anne M; Kowalski, Jonathan W; Walt, John G; Hays, Ron D; Brazier, John E; Yu, Ren; Lee, Paul; Bressler, Neil; Revicki, Dennis A

    2014-03-01

    Understanding how individuals value health states is central to patient-centered care and to health policy decision making. Generic preference-based measures of health may not effectively capture the impact of ocular diseases. Recently, 6 items from the National Eye Institute Visual Function Questionnaire-25 were used to develop the Visual Function Questionnaire-Utility Index health state classification, which defines visual function health states. To describe elicitation of preferences for health states generated from the Visual Function Questionnaire-Utility Index health state classification and development of an algorithm to estimate health preference scores for any health state. Nonintervention, cross-sectional study of the general community in 4 countries (Australia, Canada, United Kingdom, and United States). A total of 607 adult participants were recruited from local newspaper advertisements. In the United Kingdom, an existing database of participants from previous studies was used for recruitment. Eight of 15,625 possible health states from the Visual Function Questionnaire-Utility Index were valued using time trade-off technique. A θ severity score was calculated for Visual Function Questionnaire-Utility Index-defined health states using item response theory analysis. Regression models were then used to develop an algorithm to assign health state preference values for all potential health states defined by the Visual Function Questionnaire-Utility Index. Health state preference values for the 8 states ranged from a mean (SD) of 0.343 (0.395) to 0.956 (0.124). As expected, preference values declined with worsening visual function. Results indicate that the Visual Function Questionnaire-Utility Index describes states that participants view as spanning most of the continuum from full health to dead. Visual Function Questionnaire-Utility Index health state classification produces health preference scores that can be estimated in vision-related studies that include the National Eye Institute Visual Function Questionnaire-25. These preference scores may be of value for estimating utilities in economic and health policy analyses.

  5. A strategy for analysis of (molecular) equilibrium simulations: Configuration space density estimation, clustering, and visualization

    NASA Astrophysics Data System (ADS)

    Hamprecht, Fred A.; Peter, Christine; Daura, Xavier; Thiel, Walter; van Gunsteren, Wilfred F.

    2001-02-01

    We propose an approach for summarizing the output of long simulations of complex systems, affording a rapid overview and interpretation. First, multidimensional scaling techniques are used in conjunction with dimension reduction methods to obtain a low-dimensional representation of the configuration space explored by the system. A nonparametric estimate of the density of states in this subspace is then obtained using kernel methods. The free energy surface is calculated from that density, and the configurations produced in the simulation are then clustered according to the topography of that surface, such that all configurations belonging to one local free energy minimum form one class. This topographical cluster analysis is performed using basin spanning trees which we introduce as subgraphs of Delaunay triangulations. Free energy surfaces obtained in dimensions lower than four can be visualized directly using iso-contours and -surfaces. Basin spanning trees also afford a glimpse of higher-dimensional topographies. The procedure is illustrated using molecular dynamics simulations on the reversible folding of peptide analogues. Finally, we emphasize the intimate relation of density estimation techniques to modern enhanced sampling algorithms.
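
    As a rough illustration of the kernel density-estimation step described above (not the authors' basin-spanning-tree clustering), the following Python sketch estimates the density of configurations in a hypothetical two-dimensional projection and converts it into a free energy surface F = -kT ln(rho); the data, grid, and temperature are placeholders.

        import numpy as np
        from scipy.stats import gaussian_kde

        # Hypothetical input: configurations already reduced to 2 coordinates
        # (e.g., by multidimensional scaling); random data stands in here.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 2))

        kT = 2.494  # kJ/mol at roughly 300 K (assumed)

        # Nonparametric (Gaussian kernel) density estimate in the subspace.
        kde = gaussian_kde(X.T)

        # Evaluate on a grid and convert to a free energy surface,
        # shifted so the global minimum is zero.
        xs, ys = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
        rho = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
        F = -kT * np.log(rho)
        F -= F.min()

        # Configurations could then be grouped by the basin of F they fall into,
        # e.g., by thresholding F and labelling connected grid components.
        print(F.shape, float(F.min()), float(F.max()))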

  6. Technical note: Simultaneous fully dynamic characterization of multiple input–output relationships in climate models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kravitz, Ben; MacMartin, Douglas G.; Rasch, Philip J.

    We introduce system identification techniques to climate science wherein multiple dynamic input–output relationships can be simultaneously characterized in a single simulation. This method, involving multiple small perturbations (in space and time) of an input field while monitoring output fields to quantify responses, allows for identification of different timescales of climate response to forcing without substantially pushing the climate far away from a steady state. We use this technique to determine the steady-state responses of low cloud fraction and latent heat flux to heating perturbations over 22 regions spanning Earth's oceans. We show that the response characteristics are similar to those of step-change simulations, but in this new method the responses for 22 regions can be characterized simultaneously. Moreover, we can estimate the timescale over which the steady-state response emerges. The proposed methodology could be useful for a wide variety of purposes in climate science, including characterization of teleconnections and uncertainty quantification to identify the effects of climate model tuning parameters.

  7. Technical note: Simultaneous fully dynamic characterization of multiple input–output relationships in climate models

    DOE PAGES

    Kravitz, Ben; MacMartin, Douglas G.; Rasch, Philip J.; ...

    2017-02-17

    We introduce system identification techniques to climate science wherein multiple dynamic input–output relationships can be simultaneously characterized in a single simulation. This method, involving multiple small perturbations (in space and time) of an input field while monitoring output fields to quantify responses, allows for identification of different timescales of climate response to forcing without substantially pushing the climate far away from a steady state. We use this technique to determine the steady-state responses of low cloud fraction and latent heat flux to heating perturbations over 22 regions spanning Earth's oceans. We show that the response characteristics are similar to those of step-change simulations, but in this new method the responses for 22 regions can be characterized simultaneously. Moreover, we can estimate the timescale over which the steady-state response emerges. The proposed methodology could be useful for a wide variety of purposes in climate science, including characterization of teleconnections and uncertainty quantification to identify the effects of climate model tuning parameters.
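
    The essence of the approach described above, recovering a dynamic input–output relationship from small, continuously applied perturbations, can be sketched with an ordinary least-squares fit of a finite impulse response; the forcing and response series below are synthetic stand-ins, not the climate-model fields of the study, and the impulse-response parameterization is only one of many possible system-identification choices.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-ins: u is a small random heating perturbation applied to one
        # region, y is the monitored response (e.g., a regional latent heat flux anomaly).
        n, lags = 2000, 30
        u = rng.normal(scale=0.1, size=n)
        true_h = 0.8 ** np.arange(lags)          # assumed "true" impulse response
        y = np.convolve(u, true_h)[:n] + rng.normal(scale=0.01, size=n)

        # Build a lagged regression matrix and solve for h such that y[t] ~ sum_k h[k]*u[t-k].
        U = np.column_stack([np.roll(u, k) for k in range(lags)])
        U[:lags, :] = 0.0                        # discard wrapped-around samples
        h_hat, *_ = np.linalg.lstsq(U, y, rcond=None)

        # The steady-state response to a unit step is the sum of the impulse response,
        # and its cumulative sum shows the timescale over which that response emerges.
        print("estimated steady-state gain:", h_hat.sum())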

  8. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are subsequently derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for performance evaluation of estimation techniques operating over wireless networks under realistic radio channel conditions.

  9. Statistics based sampling for controller and estimator design

    NASA Astrophysics Data System (ADS)

    Tenne, Dirk

    The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold and addresses the aforementioned topics: nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing the Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight line maneuvers. The third part of this dissertation addresses the design of controllers that include knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant over the domain of uncertainty consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for a linear and nonlinear spring-mass-dashpot system and the second applies a feedback controller to a hovering helicopter. Lastly, the statistical robust controller design is devoted to a concurrent feed-forward/feedback controller structure for a high-speed low tension tape drive.
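
    As a point of reference for the higher-order extensions discussed above, a minimal sketch of the standard (second-order) unscented transformation follows; the sigma-point weights, tuning constant, and example nonlinearity are generic textbook choices, not the scheme developed in the dissertation.

        import numpy as np

        def unscented_transform(mean, cov, f, kappa=1.0):
            """Propagate a mean/covariance pair through a nonlinearity f via sigma points."""
            n = mean.size
            S = np.linalg.cholesky((n + kappa) * cov)
            sigma = np.vstack([mean, mean + S.T, mean - S.T])        # 2n+1 sigma points
            w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
            w[0] = kappa / (n + kappa)
            Y = np.array([f(s) for s in sigma])
            mean_y = w @ Y
            cov_y = (w[:, None] * (Y - mean_y)).T @ (Y - mean_y)
            return mean_y, cov_y

        # Example: propagate a Gaussian through a polar-to-Cartesian conversion.
        m = np.array([1.0, np.pi / 4])
        P = np.diag([0.01, 0.01])
        f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
        print(unscented_transform(m, P, f))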

  10. How accurately can we estimate energetic costs in a marine top predator, the king penguin?

    PubMed

    Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J

    2007-01-01

    King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (f(H) - V(O2)) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present f(H) - V(O2) equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the f(H) - V(O2) technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of V(O2) from published, field f(H) data. The major conclusions from the present study are: (1) in contrast to that for walking, the f(H) - V(O2) relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(V(O2)) = -0.279 + 1.24·log(f(H)) + 0.0237·t - 0.0157·log(f(H))·t, derived in a previous study, is the most suitable equation presently available for estimating V(O2) in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an f(H) - V(O2) relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of f(H) - V(O2) prediction equations, is explained.
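
    Taken at face value, prediction equation (1) can be evaluated as below. This is a direct transcription of the reported coefficients with a made-up heart-rate value; it assumes base-10 logarithms, and the units and meaning of f(H), t, and V(O2) must be taken from the original paper before any real use.

        import math

        def predict_vo2(f_h, t):
            """Equation (1): log(VO2) = -0.279 + 1.24*log(fH) + 0.0237*t - 0.0157*log(fH)*t
            (base-10 logarithms assumed)."""
            log_vo2 = (-0.279 + 1.24 * math.log10(f_h)
                       + 0.0237 * t - 0.0157 * math.log10(f_h) * t)
            return 10 ** log_vo2

        # Hypothetical example: heart rate of 120 beats/min at t = 10 (units as in the paper).
        print(predict_vo2(120.0, 10.0))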

  11. Assimilation of thermospheric measurements for ionosphere-thermosphere state estimation

    NASA Astrophysics Data System (ADS)

    Miladinovich, Daniel S.; Datta-Barua, Seebany; Bust, Gary S.; Makela, Jonathan J.

    2016-12-01

    We develop a method that uses data assimilation to estimate ionospheric-thermospheric (IT) states during midlatitude nighttime storm conditions. The algorithm Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE) uses time-varying electron densities in the F region, derived primarily from total electron content data, to estimate two drivers of the IT: neutral winds and electric potential. A Kalman filter is used to update background models based on ingested plasma densities and neutral wind measurements. This is the first time a Kalman filtering technique is used with the EMPIRE algorithm and the first time neutral wind measurements from 630.0 nm Fabry-Perot interferometers (FPIs) are ingested to improve estimates of storm time ion drifts and neutral winds. The effects of assimilating remotely sensed neutral winds from FPI observations are studied by comparing results of ingesting: electron densities (N) only, N plus half the measurements from a single FPI, and then N plus all of the FPI data. While estimates of ion drifts and neutral winds based on N give estimates similar to the background models, this study's results show that ingestion of the FPI data can significantly change neutral wind and ion drift estimation away from background models. In particular, once neutral winds are ingested, estimated neutral winds agree more with validation wind data, and estimated ion drifts in the magnetic field-parallel direction are more sensitive to ingestion than the field-perpendicular zonal and meridional directions. Also, data assimilation with FPI measurements helps provide insight into the effects of contamination on 630.0 nm emissions experienced during geomagnetic storms.

  12. Health Auctions: a Valuation Experiment (HAVE) study protocol.

    PubMed

    Kularatna, Sanjeewa; Petrie, Dennis; Scuffham, Paul A; Byrnes, Joshua

    2016-04-07

    Quality-adjusted life years are derived using health state utility weights which adjust for the relative value of living in each health state compared with living in perfect health. Various techniques are used to estimate health state utility weights including time-trade-off and standard gamble. These methods have exhibited limitations in terms of complexity, validity and reliability. A new composite approach using experimental auctions to value health states is introduced in this protocol. A pilot study will test the feasibility and validity of using experimental auctions to value health states in monetary terms. A convenience sample (n=150) from a population of university staff and students will be invited to participate in 30 auction sets with a group of 5 people in each set. The 9 health states auctioned in each auction set will come from the commonly used EQ-5D-3L instrument. Each participant purchases at most 2 health states, and the participant who acquires the 2 'best' health states on average will keep the amount of money they do not spend in acquiring those health states. The value (highest bid and average bid) of each of the 24 health states will be compared across auctions to test for reliability across auction groups and across auctioneers. A test-retest will be conducted for 10% of the sample to assess reliability of responses for health state auctions. Feasibility of conducting experimental auctions to value health states will also be examined. The validity of estimated health state values will be compared with published utility estimates from other methods. This pilot study will explore the feasibility, reliability and validity of using experimental auctions for valuing health states. Ethical clearance was obtained from the Griffith University ethics committee. The results will be disseminated in peer-reviewed journals and major international conferences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. Integrated detection, estimation, and guidance in pursuit of a maneuvering target

    NASA Astrophysics Data System (ADS)

    Dionne, Dany

    The thesis focuses on efficient solutions of non-cooperative pursuit-evasion games with imperfect information on the state of the system. This problem is important in the context of interception of future maneuverable ballistic missiles. However, the theoretical developments are expected to find application to a broad class of hybrid control and estimation problems in industry. The validity of the results is nevertheless confirmed using a benchmark problem in the area of terminal guidance. A specific interception scenario between an incoming target with no information and a single interceptor missile with noisy measurements is analyzed in the form of a linear hybrid system subject to additive abrupt changes. The general research is aimed at achieving improved homing accuracy by integrating ideas from detection theory, state estimation theory and guidance. The results achieved can be summarized as follows. (i) Two novel maneuver detectors are developed to diagnose abrupt changes in a class of hybrid systems (detection and isolation of evasive maneuvers): a new implementation of the GLR detector and the novel adaptive-H0 GLR detector. (ii) Two novel state estimators for target tracking are derived using the novel maneuver detectors. The state estimators employ a parameterized family of functions to describe possible evasive maneuvers. (iii) A novel adaptive Bayesian multiple model predictor of the ballistic miss is developed which employs semi-Markov models and ideas from detection theory. (iv) A novel integrated estimation and guidance scheme that significantly improves the homing accuracy is also presented. The integrated scheme employs banks of estimators and guidance laws, a maneuver detector, and an on-line governor; the scheme is adaptive with respect to the uncertainty affecting the probability density function of the filtered state. (v) A novel discretization technique for the family of continuous-time, game theoretic, bang-bang guidance laws is introduced. The performance of the novel algorithms is assessed for the scenario of a pursuit-evasion engagement between a randomly maneuvering ballistic missile and an interceptor. Extensive Monte Carlo simulations are employed to evaluate the main statistical properties of the algorithms. (Abstract shortened by UMI.)

  14. A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.

    PubMed

    Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye

    2017-11-01

    Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy was lower by a certain degree one day after model construction but was relatively stable from one day to six months after construction. The proposed approach is superior to the state-of-the-art PTT-based model, with an approximately 2-mmHg reduction in the standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
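
    The feature-based regression step described above can be sketched as follows; the feature matrix, reference BP values, and model hyperparameters are invented placeholders, and the genetic-algorithm feature selection used in the study is omitted.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)

        # Hypothetical stand-in for the 14 ECG/PPG-derived features (e.g., pulse transit
        # time, heart rate, PPG intensity ratio) and reference systolic BP values.
        X = rng.normal(size=(500, 14))
        y = 120 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=3, size=500)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        # Support vector regression as one of the data-mining models mentioned above.
        model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X_tr, y_tr)
        pred = model.predict(X_te)

        print("correlation:", np.corrcoef(pred, y_te)[0, 1])
        print("mean error (mmHg):", np.mean(pred - y_te))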

  15. Monitoring temperatures in coal conversion and combustion processes via ultrasound

    NASA Astrophysics Data System (ADS)

    Gopalsami, N.; Raptis, A. C.; Mulcahey, T. P.

    1980-02-01

    The state of the art of instrumentation for monitoring temperatures in coal conversion and combustion systems is examined. The instrumentation types studied include thermocouples, radiation pyrometers, and acoustical thermometers. The capabilities and limitations of each type are reviewed. A feasibility study of ultrasonic thermometry is described. A mathematical model of a pulse-echo ultrasonic temperature measurement system is developed using linear system theory. The mathematical model lends itself to the adaptation of generalized correlation techniques for the estimation of propagation delays. Computer simulations are made to test the efficacy of the signal processing techniques for noise-free as well as noisy signals. Based on the theoretical study, acoustic techniques to measure temperature in reactors and combustors are feasible.
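
    The correlation-based delay estimation referred to above can be illustrated in its simplest cross-correlation form; the pulse shape, sampling rate, and delay below are invented for the example, and the mapping from delay to temperature (via the temperature-dependent sound speed) is not shown.

        import numpy as np

        fs = 1.0e6                       # assumed sampling rate, 1 MHz
        t = np.arange(0, 2e-3, 1 / fs)

        # Synthetic pulse-echo pair: the echo is a delayed, attenuated copy plus noise.
        pulse = np.exp(-((t - 1e-4) ** 2) / (2 * (1e-5) ** 2)) * np.sin(2 * np.pi * 2e5 * t)
        true_delay = 4.2e-4              # seconds
        echo = 0.5 * np.interp(t - true_delay, t, pulse, left=0.0)
        echo += 0.01 * np.random.default_rng(3).normal(size=t.size)

        # Cross-correlate and take the lag of the peak as the propagation delay estimate.
        xc = np.correlate(echo, pulse, mode="full")
        lag = np.argmax(xc) - (t.size - 1)
        print("estimated delay (s):", lag / fs)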

  16. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. 
Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.

  17. Flood Nowcasting With Linear Catchment Models, Radar and Kalman Filters

    NASA Astrophysics Data System (ADS)

    Pegram, Geoff; Sinclair, Scott

    A pilot study using real time rainfall data as input to a parsimonious linear distributed flood forecasting model is presented. The aim of the study is to deliver an operational system capable of producing flood forecasts, in real time, for the Mgeni and Mlazi catchments near the city of Durban in South Africa. The forecasts can be made at time steps which are of the order of a fraction of the catchment response time. To this end, the model is formulated in Finite Difference form in an equation similar to an Auto Regressive Moving Average (ARMA) model; it is this formulation which provides the required computational efficiency. The ARMA equation is a discretely coincident form of the State-Space equations that govern the response of an arrangement of linear reservoirs. This results in a functional relationship between the reservoir response constants and the ARMA coefficients, which guarantees stationarity of the ARMA model. Input to the model is a combined "Best Estimate" spatial rainfall field, derived from a combination of weather RADAR and Satellite rainfield estimates with point rainfall given by a network of telemetering raingauges. Several strategies are employed to overcome the uncertainties associated with forecasting. Principal among these are the use of optimal (double Kalman) filtering techniques to update the model states and parameters in response to current streamflow observations and the application of short term forecasting techniques to provide future estimates of the rainfield as input to the model.
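
    For a single linear reservoir the discretely coincident form mentioned above reduces to a one-line recursion, Q[t] = a·Q[t-1] + b·P[t-1] with a = exp(-Δt/k), which makes the link between the reservoir response constant and the ARMA coefficients explicit. The sketch below uses hypothetical parameter values and omits the Kalman updating of states and parameters against observed streamflow.

        import numpy as np

        dt = 0.25          # time step in hours (assumed)
        k = 3.0            # reservoir response constant in hours (assumed)
        a = np.exp(-dt / k)            # AR coefficient implied by the reservoir constant
        b = 1.0 - a                    # MA coefficient chosen here to conserve volume

        rng = np.random.default_rng(4)
        rain = rng.gamma(0.3, 2.0, size=200)      # stand-in catchment-average rainfall input

        q = np.zeros(rain.size)
        for t in range(1, rain.size):
            # ARMA-like recursion of the discretely coincident linear reservoir.
            q[t] = a * q[t - 1] + b * rain[t - 1]

        print("peak forecast flow:", q.max())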

  18. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so-called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis on the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
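
    The "variance matching" rescaling named above is simple to state: observations are linearly rescaled so that their mean and variance equal the model's. A minimal sketch with synthetic series follows; it illustrates the sub-optimal baseline discussed in the abstract, not the triple-collocation-based strategy.

        import numpy as np

        rng = np.random.default_rng(5)
        model = rng.normal(loc=0.25, scale=0.06, size=1000)              # model soil moisture (stand-in)
        obs = 0.5 * model + 0.10 + rng.normal(scale=0.02, size=1000)     # biased, damped retrieval

        # Variance matching: remove the observation mean/variance and impose the model's.
        obs_rescaled = (obs - obs.mean()) / obs.std() * model.std() + model.mean()

        print("means:", model.mean(), obs_rescaled.mean())
        print("stds: ", model.std(), obs_rescaled.std())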

  19. Using estimates of natural variation to detect ecologically important change in forest spatial patterns: a case study, Cascade Range, eastern Washington.

    Treesearch

    Paul F. Hessburg; Bradley G. Smith; R. Brion Salter

    1999-01-01

    Using hierarchical clustering techniques, we grouped subwatersheds on the eastern slope of the Cascade Range in Washington State into ecological subregions by similarity of area in potential vegetation and climate attributes. We then built spatially continuous historical and current vegetation maps for 48 randomly selected subwatersheds from interpretations of 1938-49...

  20. National land cover monitoring using large, permanent photo plots

    Treesearch

    Raymond L. Czaplewski; Glenn P. Catts; Paul W. Snook

    1987-01-01

    A study in the State of North Carolina, U.S.A., demonstrated that large, permanent photo plots (400 hectares) can be used to monitor large regions of land by using remote sensing techniques. Estimates of area in a variety of land cover categories were made by photointerpretation of medium-scale aerial photography from a single month using 111 photo plots. Many of these...

  1. Techniques of Water-Resources Investigations of the United States Geological Survey. Book 5, Laboratory Analysis. Chapter A5, Methods for Determination of Radioactive Substances in Water and Fluvial Sediments.

    ERIC Educational Resources Information Center

    Thatcher, L. L.; And Others

    Analytical methods for determining important components of fission and natural radioactivity found in water are reported. The discussion of each method includes conditions for application of the method, a summary of the method, interferences, required apparatus, procedures, calculations and estimation of precision. Isotopes considered are…

  2. Alpha-canonical form representation of the open loop dynamics of the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Duyar, Almet; Eldem, Vasfi; Merrill, Walter C.; Guo, Ten-Huei

    1991-01-01

    A parameter and structure estimation technique for multivariable systems is used to obtain a state space representation of open loop dynamics of the space shuttle main engine in alpha-canonical form. The parameterization being used is both minimal and unique. The simplified linear model may be used for fault detection studies and control system design and development.

  3. Future forest carbon accounting challenges: the question of regionalization

    Treesearch

    Michael C. Nichols

    2015-01-01

    Forest carbon accounting techniques are changing. This year, a new accounting system is making its debut with the production of forest carbon data for EPA’s National Greenhouse Gas Inventory. The Forest Service’s annualized inventory system is being more fully integrated into estimates of forest carbon at the national and state levels both for the present and the...

  4. A Minimum Fuel Based Estimator for Maneuver and Natural Dynamics Reconstruction

    NASA Astrophysics Data System (ADS)

    Lubey, D.; Scheeres, D.

    2013-09-01

    The vast and growing population of objects in Earth orbit (active and defunct spacecraft, orbital debris, etc.) offers many unique challenges when it comes to tracking these objects and associating the resulting observations. Complicating these challenges are the inaccurate natural dynamical models of these objects, the active maneuvers of spacecraft that deviate them from their ballistic trajectories, and the fact that spacecraft are tracked and operated by separate agencies. Maneuver detection and reconstruction algorithms can help with each of these issues by estimating mismodeled and unmodeled dynamics through indirect observation of spacecraft. It also helps to verify the associations made by an object correlation algorithm or aid in making those associations, which is essential when tracking objects in orbit. The algorithm developed in this study applies an Optimal Control Problem (OCP) Distance Metric approach to the problems of Maneuver Reconstruction and Dynamics Estimation. This was first developed by Holzinger, Scheeres, and Alfriend (2011), with a subsequent study by Singh, Horwood, and Poore (2012). This method estimates the minimum fuel control policy rather than the state as a typical Kalman Filter would. This difference ensures that the states are connected through a given dynamical model and allows for automatic covariance manipulation, which can help to prevent filter saturation. Using a string of measurements (either verified or hypothesized to correlate with one another), the algorithm outputs a corresponding string of adjoint and state estimates with associated noise. Post-processing techniques are implemented, which when applied to the adjoint estimates can remove noise and expose unmodeled maneuvers and mismodeled natural dynamics. Specifically, the estimated controls are used to determine spacecraft dependent accelerations (atmospheric drag and solar radiation pressure) using an adapted form of the Optimal Control based natural dynamics estimation scheme developed by Lubey and Scheeres (2012). In order to allow for direct comparison, the estimator developed here was modeled after a typical Kalman Filter. The estimator forces the terminal state to lie on a manifold that satisfies the least squares with a priori information cost function, thus establishing a link with a typical Kalman filter. Terms are collected into a pseudo-Kalman Gain, which creates an equivalent form in the state estimates and covariances between the two estimators. While the two estimators share common roots, the inclusion of control in the Minimum Fuel Estimator gives it special properties. For instance, the inclusion of adjoint noise can help to automatically prevent filter saturation in a manner similar to a State Noise Compensation Algorithm. This property is quite important when considering dynamics mismodeling as filter saturation will cause estimate divergence for mismodeled systems. Additional properties and alternative forms of the estimator are also explored in this study. Several implementations of this estimator are given in this paper. It is applied to LEO, GEO, and GTO orbits with drag and SRP mismodeling. The inclusion of unmodeled maneuvers is also considered. These numerical simulations verify the mathematical properties of this estimator, and demonstrate the advantages that this estimator has over typical Kalman Filters.

  5. Application of low-dimensional techniques for closed-loop control of turbulent flows

    NASA Astrophysics Data System (ADS)

    Ausseur, Julie

    The groundwork for an advanced closed-loop control of separated shear layer flows is laid out in this document. The experimental testbed for the present investigation is the turbulent flow over a NACA-4412 model airfoil tested in the Syracuse University subsonic wind tunnel at Re=135,000. The specified control objective is to delay separation - or stall - by constantly keeping the flow attached to the surface of the wing. The proper orthogonal decomposition (POD) is shown to be a valuable tool to provide a low-dimensional estimate of the flow state and the first POD expansion coefficient is proposed to be used as the control variable. Other reduced-order techniques such as the modified linear and quadratic stochastic measurement methods (mLSM, mQSM) are applied to reduce the complexity of the flow field and their ability to accurately estimate the flow state from surface pressure measurements alone is examined. A simple proportional feedback control is successfully implemented in real-time using these tools and flow separation is efficiently delayed by over 3 degrees angle of attack. To further improve the quality of the flow state estimate, the implementation of a Kalman filter is foreseen, in which the knowledge of the flow dynamics is added to the computation of the control variable to correct for the potential measurement errors. To this aim, a reduced-order model (ROM) of the flow is developed using the least-squares method to obtain the coefficients of the POD/Galerkin projection of the Navier-Stokes equations from experimental data. To build the training ensemble needed in this experimental procedure, the spectral mLSM is performed to generate time-resolved series of POD expansion coefficients from which temporal derivatives are computed. This technique, which is applied to independent PIV velocity snapshots and time-resolved surface measurements, is able to retrieve the rational temporal evolution of the flow physics in the entire 2-D measurement area. The quality of the spectral measurements is confirmed by the results from both the linear and quadratic dynamical systems. The preliminary results from the linear ROM strengthen the motivation for future control implementation of a linear Kalman filter in this flow.
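
    In its simplest snapshot form, the POD step described above amounts to a singular value decomposition of the mean-subtracted snapshot matrix, with the first expansion coefficient (proposed as the control variable) obtained by projecting each snapshot onto the first mode. A generic sketch with random stand-in data, not the wind-tunnel PIV fields:

        import numpy as np

        rng = np.random.default_rng(6)

        # Hypothetical snapshot matrix: each row is one velocity snapshot,
        # flattened over the measurement grid.
        snapshots = rng.normal(size=(400, 5000))
        mean_flow = snapshots.mean(axis=0)
        fluct = snapshots - mean_flow

        # Snapshot POD via SVD: rows of Vt are the spatial POD modes,
        # and U * s gives the temporal expansion coefficients.
        U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
        a = U * s                       # a[:, 0] is the first POD expansion coefficient

        energy = s**2 / np.sum(s**2)
        print("energy captured by first mode:", energy[0])
        print("first expansion coefficient of snapshot 0:", a[0, 0])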

  6. Some photophysical properties of new oligomer obtained from anodic oxidation of 4,4‧-dimethoxychalcone

    NASA Astrophysics Data System (ADS)

    Ghomrasni, S.; Aribi, I.; Chemek, M.; Said, A. Haj; Alimi, K.

    2018-04-01

    Some photophysical properties of a new oligomer obtained from the anodic oxidation of 4,4‧-dimethoxychalcone were investigated using different and complementary techniques. Firstly, TGA analysis and X-ray diffraction experiments showed that the oligomer is thermally stable up to 500 K and partially organized in the solid state, respectively. Secondly, the optical properties of the oligomer were studied in solution and in the solid state. The optical band gap was estimated to be 3.17 eV in the solution state and 2.70 eV in the film state. In addition, the fluorescence decay was determined and found to be considerably faster in the film state (0.183 ns) than in the solution state (1.606 ns), due to rapid non-radiative decay at inter-chain trap sites.

  7. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
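
    A bare-bones extended Kalman filter on a first-order RC equivalent circuit, roughly in the spirit of the baseline described above but without the proportional-integral error adjustment; all parameter values, noise levels, and the linear stand-in for the open-circuit-voltage curve are invented for illustration.

        import numpy as np

        # Assumed equivalent-circuit parameters (illustrative only).
        Q = 2.0 * 3600.0        # capacity in ampere-seconds
        R0, R1, C1 = 0.05, 0.02, 1500.0
        dt = 1.0

        def ocv(soc):           # crude linear stand-in for the open-circuit-voltage curve
            return 3.4 + 0.8 * soc

        def docv(soc):          # its derivative with respect to SOC
            return 0.8

        # Discrete model: state x = [SOC, V1], input i (discharge positive),
        # terminal voltage y = ocv(SOC) - V1 - R0*i.
        A = np.array([[1.0, 0.0], [0.0, np.exp(-dt / (R1 * C1))]])
        B = np.array([-dt / Q, R1 * (1.0 - np.exp(-dt / (R1 * C1)))])

        x = np.array([0.9, 0.0])            # deliberately wrong initial SOC estimate
        P = np.diag([1e-2, 1e-3])           # state covariance
        Qn = np.diag([1e-7, 1e-6])          # process noise (tuning)
        Rn = 1e-4                           # measurement noise variance

        rng = np.random.default_rng(7)
        soc_true, v1_true = 1.0, 0.0
        for k in range(3600):
            i = 1.0                          # 1 A constant discharge (example input)
            # Simulate "truth" and a noisy terminal-voltage measurement.
            soc_true += -dt * i / Q
            v1_true = A[1, 1] * v1_true + B[1] * i
            y = ocv(soc_true) - v1_true - R0 * i + rng.normal(scale=0.005)

            # EKF predict.
            x = A @ x + B * i
            P = A @ P @ A.T + Qn
            # EKF update with linearized measurement H = [dOCV/dSOC, -1].
            H = np.array([docv(x[0]), -1.0])
            y_hat = ocv(x[0]) - x[1] - R0 * i
            S = H @ P @ H + Rn
            K = P @ H / S
            x = x + K * (y - y_hat)
            P = P - np.outer(K, H) @ P

        print("estimated SOC:", x[0], "true SOC:", soc_true)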

  8. Probabilistic Photometric Redshifts in the Era of Petascale Astronomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrasco Kind, Matias

    2014-01-01

    With the growth of large photometric surveys, accurately estimating photometric redshifts, preferably as a probability density function (PDF), and fully understanding the implicit systematic uncertainties in this process has become increasingly important. These surveys are expected to obtain images of billions of distinct galaxies. As a result, storing and analyzing all of these photometric redshift PDFs will be non-trivial, and this challenge becomes even more severe if a survey plans to compute and store multiple different PDFs. In this thesis, we have developed an end-to-end framework that will compute accurate and robust photometric redshift PDFs for massive data sets by using two new, state-of-the-art machine learning techniques that are based on a random forest and a random atlas, respectively. By using data from several photometric surveys, we demonstrate the applicability of these new techniques, and we demonstrate that our new approach is among the best techniques currently available. We also show how different techniques can be combined by using novel Bayesian techniques to improve the photometric redshift precision to unprecedented levels while also presenting new approaches to better identify outliers. In addition, our framework provides supplementary information regarding the data being analyzed, including unbiased estimates of the accuracy of the technique without resorting to a validation data set, identification of poor photometric redshift areas within the parameter space occupied by the spectroscopic training data, and a quantification of the relative importance of the variables used during the estimation process. Furthermore, we present a new approach to represent and store photometric redshift PDFs by using a sparse representation with outstanding compression and reconstruction capabilities. We also demonstrate how this framework can also be directly incorporated into cosmological analyses. The new techniques presented in this thesis are crucial to enable the development of precision cosmology in the era of petascale astronomical surveys.

  9. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.

  10. Designing hydrologic monitoring networks to maximize predictability of hydrologic conditions in a data assimilation system: a case study from South Florida, U.S.A

    NASA Astrophysics Data System (ADS)

    Flores, A. N.; Pathak, C. S.; Senarath, S. U.; Bras, R. L.

    2009-12-01

    Robust hydrologic monitoring networks represent a critical element of decision support systems for effective water resource planning and management. Moreover, process representation within hydrologic simulation models is steadily improving, while at the same time computational costs are decreasing due to, for instance, readily available high performance computing resources. The ability to leverage these increasingly complex models together with the data from these monitoring networks to provide accurate and timely estimates of relevant hydrologic variables within a multiple-use, managed water resources system would substantially enhance the information available to resource decision makers. Numerical data assimilation techniques provide mathematical frameworks through which uncertain model predictions can be constrained to observational data to compensate for uncertainties in the model forcings and parameters. In ensemble-based data assimilation techniques such as the ensemble Kalman Filter (EnKF), information in observed variables such as canal, marsh and groundwater stages are propagated back to the model states in a manner related to: (1) the degree of certainty in the model state estimates and observations, and (2) the cross-correlation between the model states and the observable outputs of the model. However, the ultimate degree to which hydrologic conditions can be accurately predicted in an area of interest is controlled, in part, by the configuration of the monitoring network itself. In this proof-of-concept study we developed an approach by which the design of an existing hydrologic monitoring network is adapted to iteratively improve the predictions of hydrologic conditions within an area of the South Florida Water Management District (SFWMD). The objective of the network design is to minimize prediction errors of key hydrologic states and fluxes produced by the spatially distributed Regional Simulation Model (RSM), developed specifically to simulate the hydrologic conditions in several intensively managed and hydrologically complex watersheds within the SFWMD system. In a series of synthetic experiments RSM is used to generate the notionally true hydrologic state and the relevant observational data. The EnKF is then used as the mechanism to fuse RSM hydrologic estimates with data from the candidate network. The performance of the candidate network is measured by the prediction errors of the EnKF estimates of hydrologic states, relative to the notionally true scenario. The candidate network is then adapted by relocating existing observational sites to unobserved areas where predictions of local hydrologic conditions are most uncertain and the EnKF procedure repeated. Iteration of the monitoring network continues until further improvements in EnKF-based predictions of hydrologic conditions are negligible.
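
    The analysis step at the core of the network-design loop described above is an ensemble Kalman filter update. A generic stochastic EnKF sketch follows; the state and observation dimensions, error levels, and the observation operator are all invented for illustration and are not the RSM configuration used in the study.

        import numpy as np

        rng = np.random.default_rng(10)

        n_state, n_obs, n_ens = 200, 15, 50
        H = np.zeros((n_obs, n_state))
        H[np.arange(n_obs), rng.choice(n_state, n_obs, replace=False)] = 1.0   # observe 15 stages
        R = 0.05**2 * np.eye(n_obs)

        ensemble = rng.normal(size=(n_state, n_ens))       # stand-in forecast ensemble
        y = rng.normal(size=n_obs)                         # stand-in observations

        # Ensemble statistics and Kalman gain.
        x_mean = ensemble.mean(axis=1, keepdims=True)
        A = ensemble - x_mean
        Pf = A @ A.T / (n_ens - 1)
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

        # Stochastic EnKF: update each member with a perturbed observation.
        y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
        analysis = ensemble + K @ (y_pert - H @ ensemble)

        print("analysis ensemble spread:", analysis.std(axis=1).mean())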

  11. A comparison of two above-ground biomass estimation techniques integrating satellite-based remotely sensed data and ground data for tropical and semiarid forests in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Iiames, J. S.; Riegel, J.; Lunetta, R.

    2013-12-01

    Two above-ground forest biomass estimation techniques were evaluated for the United States Territory of Puerto Rico using predictor variables acquired from satellite based remotely sensed data and ground data from the U.S. Department of Agriculture Forest Inventory and Analysis (FIA) program. The U.S. Environmental Protection Agency (EPA) estimated above-ground forest biomass implementing a methodology first posited by the Woods Hole Research Center for the conterminous United States (National Biomass and Carbon Dataset [NBCD2000]). For EPA's effort, spatial predictor layers for above-ground biomass estimation included derived products from the U.S. Geological Survey (USGS) National Land Cover Dataset 2001 (NLCD) (landcover and canopy density), the USGS Gap Analysis Program (forest type classification), the USGS National Elevation Dataset, and the NASA Shuttle Radar Topography Mission (tree heights). In contrast, the U.S. Forest Service (USFS) biomass product integrated FIA ground-based data with a suite of geospatial predictor variables including: (1) the Moderate Resolution Imaging Spectrometer (MODIS)-derived image composites and percent tree cover; (2) NLCD land cover proportions; (3) topographic variables; (4) monthly and annual climate parameters; and (5) other ancillary variables. Correlations between both data sets were made at variable watershed scales to test level of agreement. Notice: This work is done in support of EPA's Sustainable Healthy Communities Research Program. The U.S. EPA funded and conducted the research described in this paper. Although this work was reviewed by the EPA and has been approved for publication, it may not necessarily reflect official Agency policy. Mention of any trade names or commercial products does not constitute endorsement or recommendation for use.

  12. Estimation of critical behavior from the density of states in classical statistical models

    NASA Astrophysics Data System (ADS)

    Malakis, A.; Peratzakis, A.; Fytas, N. G.

    2004-12-01

    We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) in large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and by using this restricted part we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully for the estimation of the scaling behavior of specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.
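
    For context, a compact Wang-Landau sketch for the zero-field 2D Ising model is given below; it estimates the log density of states over the full energy range on a deliberately tiny lattice, i.e., the standard algorithm rather than the restricted critical-energy-range extension proposed in the paper, and the flatness criterion and stopping threshold are crude choices made to keep the example short.

        import numpy as np

        L = 4                                    # tiny lattice so the loop finishes quickly
        rng = np.random.default_rng(8)
        spins = rng.choice([-1, 1], size=(L, L))

        def energy(s):
            # Nearest-neighbour Ising energy with periodic boundaries.
            return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

        N = L * L
        e_levels = np.arange(-2 * N, 2 * N + 1, 4)      # possible energies, step 4
        index = {e: i for i, e in enumerate(e_levels)}

        log_g = np.zeros(e_levels.size)                 # running estimate of ln g(E)
        hist = np.zeros(e_levels.size)
        f = 1.0                                         # modification factor ln(f)
        E = energy(spins)

        while f > 1e-3:
            for _ in range(20000):
                i, j = rng.integers(L, size=2)
                nb = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
                dE = 2 * spins[i, j] * nb
                # Accept with probability min(1, g(E)/g(E+dE)).
                if np.log(rng.random()) < log_g[index[E]] - log_g[index[E + dE]]:
                    spins[i, j] *= -1
                    E += dE
                log_g[index[E]] += f
                hist[index[E]] += 1
            nz = hist[hist > 0]
            if nz.min() > 0.8 * nz.mean():              # crude flat-histogram check
                hist[:] = 0
                f /= 2.0

        print("relative ln g at the ground state:", log_g[0] - log_g.min())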

  13. Inverse estimation of parameters for an estuarine eutrophication model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, J.; Kuo, A.Y.

    1996-11-01

    An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, using the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, degradation of the speed of convergence may occur. Two major factors that cause this degradation are cross effects among parameters and the multiple scales involved in the parameter system.

  14. Ab Initio Studies of Shock-Induced Chemical Reactions of Inter-Metallics

    NASA Astrophysics Data System (ADS)

    Zaharieva, Roussislava; Hanagud, Sathya

    2009-06-01

    Shock-induced and shock-assisted chemical reactions of intermetallic mixtures are studied by many researchers, using both experimental and theoretical techniques. The theoretical studies are primarily at continuum scales. The model frameworks include mixture theories and meso-scale models of grains of porous mixtures. The reaction models vary from an equilibrium thermodynamic model to several non-equilibrium thermodynamic models. The shock effects are primarily studied using appropriate conservation equations and numerical techniques to integrate the equations. All these models require material constants from experiments and estimates of transition states. Thus, the objective of this paper is to present studies based on ab initio techniques. The ab initio studies, to date, use ab initio molecular dynamics. This paper presents a study that uses shock pressures and associated temperatures as starting variables. The intermetallic mixtures are modeled as slabs. The required shock stresses are created by straining the lattice. Then, ab initio binding energy calculations are used to examine the stability of the reactions. Binding energies are obtained for different strain components superimposed on uniform compression and finite temperatures. Then, vibrational frequencies and nudged elastic band techniques are used to study reactivity and transition states. Examples include Ni and Al.

  15. Using Smartphone Sensors for Improving Energy Expenditure Estimation

    PubMed Central

    Zhu, Jindan; Das, Aveek K.; Zeng, Yunze; Mohapatra, Prasant; Han, Jay J.

    2015-01-01

    Energy expenditure (EE) estimation is an important factor in tracking personal activity and preventing chronic diseases, such as obesity and diabetes. Accurate and real-time EE estimation utilizing small wearable sensors is a difficult task, primarily because most existing schemes work offline or use heuristics. In this paper, we focus on accurate EE estimation for tracking ambulatory activities (walking, standing, climbing upstairs, or downstairs) of a typical smartphone user. We used built-in smartphone sensors (accelerometer and barometer sensor), sampled at low frequency, to accurately estimate EE. Using a barometer sensor, in addition to an accelerometer sensor, greatly increases the accuracy of EE estimation. Using bagged regression trees, a machine learning technique, we developed a generic regression model for EE estimation that yields up to 96% correlation with actual EE. We compare our results against the state-of-the-art calorimetry equations and consumer electronics devices (Fitbit and Nike+ FuelBand). The newly developed EE estimation algorithm demonstrated superior accuracy compared with currently available methods. The results were calibrated against COSMED K4b2 calorimeter readings. PMID:27170901

  16. Using Smartphone Sensors for Improving Energy Expenditure Estimation.

    PubMed

    Pande, Amit; Zhu, Jindan; Das, Aveek K; Zeng, Yunze; Mohapatra, Prasant; Han, Jay J

    2015-01-01

    Energy expenditure (EE) estimation is an important factor in tracking personal activity and preventing chronic diseases, such as obesity and diabetes. Accurate and real-time EE estimation utilizing small wearable sensors is a difficult task, primarily because most existing schemes work offline or use heuristics. In this paper, we focus on accurate EE estimation for tracking ambulatory activities (walking, standing, climbing upstairs, or downstairs) of a typical smartphone user. We used built-in smartphone sensors (accelerometer and barometer sensor), sampled at low frequency, to accurately estimate EE. Using a barometer sensor, in addition to an accelerometer sensor, greatly increases the accuracy of EE estimation. Using bagged regression trees, a machine learning technique, we developed a generic regression model for EE estimation that yields up to 96% correlation with actual EE. We compare our results against the state-of-the-art calorimetry equations and consumer electronics devices (Fitbit and Nike+ FuelBand). The newly developed EE estimation algorithm demonstrated superior accuracy compared with currently available methods. The results were calibrated against COSMED K4b2 calorimeter readings.
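
    The bagged-regression-tree model named in the two records above can be sketched as follows; the feature set, reference EE values, and hyperparameters are hypothetical stand-ins, not the features or data of the study.

        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(9)

        # Hypothetical features from accelerometer/barometer windows
        # (e.g., mean/variance of acceleration magnitude, pressure-change rate)
        # and reference energy expenditure from an indirect calorimeter.
        X = rng.normal(size=(1000, 6))
        ee = 3.0 + 1.5 * X[:, 0] + 0.8 * np.abs(X[:, 3]) + rng.normal(scale=0.3, size=1000)

        # Bagged regression trees as the generic EE regression model.
        model = BaggingRegressor(DecisionTreeRegressor(max_depth=6), n_estimators=50, random_state=0)
        pred = cross_val_predict(model, X, ee, cv=5)

        print("correlation with reference EE:", np.corrcoef(pred, ee)[0, 1])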

  17. [Fractal characteristics of the functional state of the brain in patients with anxious phobic disorders].

    PubMed

    Dik, O E; Sviatogor, I A; Ishinova, V A; Nozdrachev, A D

    2012-01-01

    The task of estimating the functional state of the human brain during psychotherapeutic treatment of psychogenic pain in patients with anxious phobic disorders is examined. To solve this task, spectral and multifractal analyses are applied to EEG fragments recorded during the perception of psychogenic pain and during its removal by a psychorelaxation technique. In contrast to power spectra, singularity spectra allow the EEGs recorded in the examined functional states of the brain to be distinguished quantitatively. Pain suppression during psychorelaxation in patients with anxious phobic disorders is accompanied by a change in the width of the singularity spectrum, with this multifractal parameter approaching the value observed in a healthy subject.

  18. Fabrication of Defect-Free Ferroelectric Liquid Crystal Displays Using Photoalignment and Their Electrooptic Performance

    NASA Astrophysics Data System (ADS)

    Kurihara, Ryuji; Furue, Hirokazu; Takahashi, Taiju; Yamashita, Tomo-o; Xu, Jun; Kobayashi, Shunsuke

    2001-07-01

    A photoalignment technique has been utilized for fabricating zigzag-defect-free ferroelectric liquid crystal displays (FLCDs) using polyimides RN-1199, -1286, and -1266 (Nissan Chem. Ind.) and adopting oblique irradiation with unpolarized UV light. A rubbing technique was also utilized for comparison. It is shown that among these polyimide materials, RN-1199 is the best for fabricating defect-free cells with C-1 uniform states, while RN-1286 requires low energy to produce a photoaligned FLC phase. We have conducted an analytical investigation to clarify the conditions for obtaining zigzag-defect-free C-1 states, and it is theoretically shown that a zigzag-defect-free C-1 state is obtained using a low azimuthal anchoring energy at a low pretilt angle, while a zigzag-defect-free C-2 state is obtained by increasing the azimuthal anchoring energy above a critical value, also at a low pretilt angle. The estimated critical value of the azimuthal anchoring energy at which a transition from the C-1 state to the C-2 state occurs is 3×10⁻⁶ J/m² for the FLC material FELIX M4654/100 (Clariant) used in this research; this value is shown to fall within a favorable range measured in an independent experiment.

  19. Estimating population parameters for northern and southern breeding populations of Canada geese

    USGS Publications Warehouse

    Hestbeck, J.B.; Rusch, Donald H.; Samuel, Michael D.; Humburg, Dale D.; Sullivan, Brian D.

    1998-01-01

    Canada geese (Branta canadensis) have been managed largely as a migratory resource. In the 1960s, Canada goose flocks were restored to historic breeding ranges in the United States and southern Canada to enhance recreational opportunity for observation and harvest. These populations of southern breeding geese have rapidly expanded, increasing conflicts with social and economic interests and causing the Midwinter Waterfowl Survey to be less effective as a management tool to monitor migrant populations. Wildlife agencies need methods to control local, southern breeding geese that reduce conflicts while providing adequate protection to populations of northern breeding geese. New techniques have been developed using mark-resight data from neck-banded geese to estimate distribution and population size during the late summer, fall, and mid-winter. Survival and movement rates can be estimated over special early or late hunting seasons, the traditional fall-winter hunting season, and nonharvest periods. Direct recovery rates can be estimated for special and traditional harvest periods, and these recovery rates can be related to survival and movement rates. Changes in harvest regulations can be related to changes in recovery, survival, and movement rates for specific cohorts of Canada geese. These techniques can be used to monitor population status and determine more appropriate harvest strategies.
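
    As a toy illustration of the kind of calculation behind neck-band resight surveys, a simple Lincoln-Petersen-style population estimate in Python; all numbers are invented, and the actual techniques cited above are considerably more elaborate:

        # Hypothetical survey numbers; not data from the study above.
        marked = 400            # neck-banded geese known to be alive in the survey area
        sighted = 2500          # geese examined during a resight survey
        marked_sighted = 80     # of those, how many carried neck bands

        # Simple Lincoln-Petersen estimator (bias corrections omitted for clarity).
        n_hat = marked * sighted / marked_sighted
        print(f"estimated population size ~ {n_hat:.0f}")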

  20. Transient high frequency signal estimation: A model-based processing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, F.L.

    1985-03-22

    By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections were removed by direct application of a Wiener-type estimation algorithm after the appropriate input was synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated, and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident signal and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple applications of the processing procedure were required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
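
    One plausible reading of the autocorrelation-ratio step, sketched in Python for a single nondispersive reflection x(t) = s(t) + a*s(t - d); the broadband-signal assumption and all parameters are illustrative, and this is not the original SIG procedure:

        import numpy as np

        rng = np.random.default_rng(1)
        n, d, a_true = 4096, 60, 0.4
        s = rng.normal(size=n)                 # broadband incident signal
        x = s.copy()
        x[d:] += a_true * s[:-d]               # add one nondispersive reflection at lag d

        def autocorr(sig, lag):
            return np.dot(sig[:sig.size - lag], sig[lag:]) / sig.size

        # For broadband s, R_x(d)/R_x(0) ~ a / (1 + a^2); solve that ratio for a.
        r = autocorr(x, d) / autocorr(x, 0)
        a_est = (1.0 - np.sqrt(1.0 - 4.0 * r * r)) / (2.0 * r)
        print(f"true a = {a_true}, estimated a = {a_est:.3f}")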

  1. Age estimation from dental cementum incremental lines and periodontal disease.

    PubMed

    Dias, P E M; Beaini, T L; Melani, R F H

    2010-12-01

    Age estimation by counting incremental lines in cementum, added to the average age of tooth eruption, is considered an accurate method by some authors, while others reject it, citing weak correlation between estimated and actual age. The aim of this study was to evaluate this technique and check the influence of periodontal disease on age estimates by analyzing both the number of cementum lines and the correlation between cementum thickness and actual age on freshly extracted teeth. Thirty-one undecalcified ground cross-sections of approximately 30 µm from 25 teeth were prepared, observed, photographed, and measured. Images were enhanced by software, counts were made by one observer, and the results were compared with those of two control observers. There was moderate correlation (r = 0.58) for the entire sample, with a mean error of 9.7 years. For teeth with periodontal pathologies, the correlation was 0.03 with a mean error of 22.6 years. For teeth without periodontal pathologies, the correlation was 0.74 with a mean error of 1.6 years. There was a correlation of 0.69 between cementum thickness and known age for the entire sample, 0.25 for teeth with periodontal problems, and 0.75 for teeth without periodontal pathologies. The technique was reliable for periodontally sound teeth, but not for periodontally diseased teeth.
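
    The estimation rule itself is a simple sum, shown below as a toy Python illustration; the eruption ages and line count are invented placeholder values, not data from the study:

        # Toy illustration of the counting method: estimated age is the cementum line
        # count plus the mean eruption age for that tooth type (values illustrative).
        mean_eruption_age = {"first_premolar": 10.5, "canine": 11.0}   # years, placeholder values
        line_count = 32                                                # incremental lines counted
        estimated_age = mean_eruption_age["first_premolar"] + line_count
        print(f"estimated age ~ {estimated_age:.1f} years")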

  2. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

    Akmaev, R. A.

    1999-04-01

    In Part 1 of this work (Akmaev, 1999), an overview is presented of the theory of optimal interpolation (OI) (Gandin, 1963) and related data assimilation techniques based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995). The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and to obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares, or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
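
    A minimal numerical sketch of a single OI analysis step in the spirit described above; the grid size, covariance model, and observation layout are illustrative assumptions:

        import numpy as np

        n = 50                                                       # analysis grid points
        xs = np.linspace(0.0, 1.0, n)
        B = np.exp(-(xs[:, None] - xs[None, :])**2 / (2 * 0.1**2))   # a priori (background) error covariance
        obs_idx = np.array([5, 20, 35, 48])                          # sparse observation locations
        H = np.zeros((obs_idx.size, n))
        H[np.arange(obs_idx.size), obs_idx] = 1.0                    # observation operator
        R = 0.05 * np.eye(obs_idx.size)                              # observation error covariance

        truth = np.sin(2 * np.pi * xs)
        x_b = np.zeros(n)                                            # background (first guess)
        y = truth[obs_idx] + np.random.default_rng(2).normal(0, 0.05, obs_idx.size)

        # Minimum-variance (OI) weights and analysis: fills data voids and smooths noise.
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        x_a = x_b + K @ (y - H @ x_b)
        print("RMS error of analysis:", np.sqrt(np.mean((x_a - truth)**2)))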

  3. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
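
    A toy sketch of the statistical ingredient only: a Beta-Binomial (Bayesian) estimate of the probability of reaching a target event from sampled path outcomes, with an exactly analysed (pruned) portion folded in; all numbers are invented and none of the symbolic-execution machinery is shown:

        from scipy import stats

        p_pruned_mass = 0.30     # probability mass of pruned high-likelihood paths (exact analysis)
        p_pruned_hit = 0.25      # portion of that mass that reaches the target event (exact)

        hits, samples = 37, 1000                              # Monte Carlo outcomes on remaining paths
        posterior = stats.beta(1 + hits, 1 + samples - hits)  # uniform Beta(1, 1) prior

        # Exact contribution plus the estimated contribution over the residual mass.
        p_target = p_pruned_hit + (1 - p_pruned_mass) * posterior.mean()
        lo, hi = posterior.interval(0.95)
        print(f"P(target) ~ {p_target:.4f}; 95% credible interval for the sampled part: [{lo:.4f}, {hi:.4f}]")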

  4. Conservation Physiology of an Uncatchable Animal: The North Atlantic Right Whale (Eubalaena glacialis).

    PubMed

    Hunt, Kathleen E; Rolland, Rosalind M; Kraus, Scott D

    2015-10-01

    The North Atlantic right whale, Eubalaena glacialis (NARW), a critically endangered species that has been under intensive study for nearly four decades, provides an excellent case study for applying modern methods of conservation physiology to large whales. By combining long-term sighting histories of known individuals with physiological data from newer techniques (e.g., body condition estimated from photographs; endocrine status derived from fecal samples), physiological state and levels of stress can be estimated despite the lack of any method for nonlethal capture of large whales. Since traditional techniques for validating blood assays cannot be used in large whales, assays of fecal hormones have been validated using information on age, sex, and reproductive state derived from an extensive NARW photo-identification catalog. Using this approach, fecal glucocorticoids have been found to vary dramatically with reproductive state. It is therefore essential that glucocorticoid data be interpreted in conjunction with reproductive data. A case study correlating glucocorticoids with chronic noise is presented as an example. Keys to a successful research program for this uncatchable species have included: consistent population monitoring over decades, data-sharing across institutions, an extensive photo-identification catalog that documents individual histories, and consistent efforts at noninvasive collection of samples over years. Future research will require flexibility to adjust to changing distributions of populations. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  5. A regional high-resolution carbon flux inversion of North America for 2004

    NASA Astrophysics Data System (ADS)

    Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Baker, I. T.; Uliasz, M.; Parazoo, N.; Andrews, A. E.; Worthy, D. E. J.

    2010-05-01

    Resolving the discrepancies between NEE estimates based upon (1) ground studies and (2) atmospheric inversion results demands increasingly sophisticated techniques. In this paper we present a high-resolution inversion based upon a regional meteorology model (RAMS) and an underlying biosphere (SiB3) model, both running on an identical 40 km grid over most of North America. Current operational systems like CarbonTracker, as well as many previous global inversions including the Transcom suite of inversions, have utilized inversion regions formed by collapsing biome-similar grid cells into larger aggregated regions. An extreme example of this might be where corrections to NEE imposed on forested regions on the east coast of the United States are the same as those imposed on forests on the west coast of the United States while, in reality, there likely exist subtle differences between the two areas, both natural and anthropogenic. Our current inversion framework utilizes a combination of previously employed inversion techniques while allowing carbon flux corrections to be biome independent. Temporally and spatially high-resolution results utilizing biome-independent corrections provide insight into carbon dynamics in North America. In particular, we analyze hourly CO2 mixing ratio data from a sparse network of eight towers in North America for 2004. A prior estimate of carbon fluxes due to Gross Primary Productivity (GPP) and Ecosystem Respiration (ER) is constructed from the SiB3 biosphere model on a 40 km grid. A combination of transport from the RAMS and the Parameterized Chemical Transport Model (PCTM) models is used to forge a connection between upwind biosphere fluxes and downwind observed CO2 mixing ratio data. A Kalman filter procedure is used to estimate weekly corrections to biosphere fluxes based upon observed CO2. RMSE-weighted annual NEE estimates, over an ensemble of potential inversion parameter sets, show a mean estimate of a 0.57 Pg/yr sink in North America. We perform the inversion with two independently derived boundary inflow conditions and calculate jackknife-based statistics to test the robustness of the model results. We then compare final results to estimates obtained from the CarbonTracker inversion system and at the Southern Great Plains flux site. Results are promising, showing the ability to correct carbon fluxes from the biosphere models over annual and seasonal time scales, as well as over the different GPP and ER components. Additionally, the correlation of an estimated sink of carbon in the South Central United States with anomalously high regional precipitation in an area of managed agricultural and forest lands provides interesting hypotheses for future work.
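
    A schematic Kalman-filter update for weekly flux-correction factors, in the spirit of the inversion described above; the dimensions, transport matrix, and error covariances are illustrative stand-ins for RAMS/PCTM transport and real tower data:

        import numpy as np

        n_regions, n_obs = 10, 8
        rng = np.random.default_rng(3)
        beta = np.ones(n_regions)            # state: multiplicative corrections to prior NEE
        P = 0.25 * np.eye(n_regions)         # uncertainty of the corrections

        for week in range(52):
            H = rng.normal(0.0, 1.0, (n_obs, n_regions))   # stand-in transport: fluxes -> CO2 mixing ratios
            R = 0.5 * np.eye(n_obs)                        # mixing-ratio error covariance
            y = H @ (1.1 * np.ones(n_regions)) + rng.normal(0, 0.5, n_obs)  # synthetic observations
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            beta = beta + K @ (y - H @ beta)               # weekly correction update
            P = (np.eye(n_regions) - K @ H) @ P
        print("mean correction factor:", beta.mean())      # drifts toward the synthetic truth (1.1)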

  6. Single-Frequency GPS Relative Navigation in a High Ionosphere Orbital Environment

    NASA Technical Reports Server (NTRS)

    Conrad, Patrick R.; Naasz, Bo J.

    2007-01-01

    The Global Positioning System (GPS) provides a convenient source for space vehicle relative navigation measurements, especially for low Earth orbit formation flying and autonomous rendezvous mission concepts. For single-frequency GPS receivers, ionospheric path delay can be a significant error source if not properly mitigated. In particular, ionospheric effects are known to cause a significant radial position error bias and to add dramatically to relative state estimation error if the onboard navigation software does not force the use of measurements from common, or shared, GPS space vehicles. Results from GPS navigation simulations are presented for a pair of space vehicles flying in formation and using GPS pseudorange measurements to perform absolute and relative orbit determination. With careful measurement selection techniques, relative state estimation accuracy of better than 20 cm with standard GPS pseudorange processing, and better than 10 cm with single-differenced pseudorange processing, is demonstrated.
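
    A minimal sketch of the common-SV selection and single-differencing step discussed above; the data structures and numbers are invented, and receiver clock and geometry handling are omitted:

        def single_differences(pr_chief, pr_deputy):
            """pr_*: dict mapping GPS SV id -> pseudorange (m); only SVs common to both receivers are used."""
            common = sorted(set(pr_chief) & set(pr_deputy))
            # Differencing cancels errors common to a shared SV (SV clock, most ionospheric
            # delay for short baselines), which is why common-SV measurement selection matters.
            return {sv: pr_chief[sv] - pr_deputy[sv] for sv in common}

        chief = {5: 21000234.7, 12: 22500871.2, 25: 20988120.4}
        deputy = {5: 21000312.9, 12: 22500789.5, 29: 23012400.1}
        print(single_differences(chief, deputy))   # only SVs 5 and 12 contribute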

  7. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two-dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
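
    For concreteness, a short sketch that generates the (n+1)(n+2)/2 Padua points on [-1, 1]^2 (one of the standard point families); the interpolation and Q/Wigner reconstruction steps themselves are not shown:

        import numpy as np

        def padua_points(n):
            # Chebyshev-Lobatto pairs (cos(j*pi/n), cos(k*pi/(n+1))) with j + k even.
            return np.array([(np.cos(j * np.pi / n), np.cos(k * np.pi / (n + 1)))
                             for j in range(n + 1)
                             for k in range(n + 2)
                             if (j + k) % 2 == 0])

        pts = padua_points(6)
        print(pts.shape)   # (28, 2): far fewer samples than a comparable regular grid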

  8. Auditory steady state response in sound field.

    PubMed

    Hernández-Pérez, H; Torres-Fortuny, A

    2013-02-01

    Physiological and behavioral responses were compared in normal-hearing subjects via analyses of the auditory steady-state response (ASSR) and conventional audiometry under sound field conditions. The auditory stimuli, presented through a loudspeaker, consisted of four carrier tones (500, 1000, 2000, and 4000 Hz), presented singly for behavioral testing but combined (multiple-frequency technique) to estimate thresholds using the ASSR. Twenty normal-hearing adults were examined. The average differences between the physiological and behavioral thresholds were between 17 and 22 dB HL. The Spearman rank correlation between ASSR and behavioral thresholds was significant for all frequencies (p < 0.05). Significant differences were found in the ASSR amplitude among frequencies, and strong correlations between the ASSR amplitude and the stimulus level (p < 0.05). The ASSR in sound field testing was found to yield hearing threshold estimates deemed to be reasonably well correlated with behaviorally assessed thresholds.

  9. Optical Sensing of the Fatigue Damage State of CFRP under Realistic Aeronautical Load Sequences

    PubMed Central

    Zuluaga-Ramírez, Pablo; Arconada, Álvaro; Frövel, Malte; Belenguer, Tomás; Salazar, Félix

    2015-01-01

    We present an optical sensing methodology to estimate the fatigue damage state of structures made of carbon fiber reinforced polymer (CFRP), by measuring variations in surface roughness. Variable amplitude loads (VAL), which represent realistic loads during aeronautical missions of fighter aircraft (FALSTAFF), have been applied to coupons until failure. Stiffness degradation and surface roughness variations have been measured during the life of the coupons, obtaining a Pearson correlation of 0.75 between both variables. The data were compared with a previous study for Constant Amplitude Load (CAL), obtaining similar results. Conclusions suggest that the surface roughness measured in strategic zones is a useful technique for structural health monitoring of CFRP structures, and that it is independent of the type of load applied. Surface roughness can be measured in the field by optical techniques such as speckle, confocal profilometers, and interferometry, among others. PMID:25760056

  10. Seismic and acoustic signal identification algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LADD,MARK D.; ALAM,M. KATHLEEN; SLEEFE,GERARD E.

    2000-04-03

    This paper will describe an algorithm for detecting and classifying seismic and acoustic signals for unattended ground sensors. The algorithm must be computationally efficient and continuously process a data stream in order to establish whether or not a desired signal has changed state (turned on or off). The paper will focus on describing a Fourier-based technique that compares the running power spectral density estimate of the data to a predetermined signature in order to determine if the desired signal has changed state. How to establish the signature and the detection thresholds will be discussed, as well as the theoretical statistics of the algorithm for the Gaussian noise case, with results from simulated data. Actual seismic data results will also be discussed, along with techniques used to reduce false alarms due to the inherent nonstationary noise environments found with actual data.
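
    An illustrative version of the detector idea: compare a running Welch power-spectral-density estimate of the stream against a stored signature band and threshold the match; the signal, band, and threshold below are synthetic assumptions, not the paper's algorithm:

        import numpy as np
        from scipy.signal import welch

        fs = 1000.0
        rng = np.random.default_rng(4)

        def one_second_of_data(source_on):
            t = np.arange(int(fs)) / fs
            tone = 0.5 * np.sin(2 * np.pi * 60 * t) if source_on else 0.0
            return tone + rng.normal(0.0, 1.0, t.size)

        f, signature_psd = welch(one_second_of_data(True), fs=fs, nperseg=256)  # predetermined signature
        band = (f > 50) & (f < 70)

        for source_on in (False, True, True, False):
            f, psd = welch(one_second_of_data(source_on), fs=fs, nperseg=256)
            score = psd[band].sum() / signature_psd[band].sum()   # energy relative to the signature band
            print("ON " if score > 0.5 else "off", round(score, 2))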

  11. An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Zhou, Ning

    With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide the capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.

  12. Terahertz Measurement of the Water Content Distribution in Wood Materials

    NASA Astrophysics Data System (ADS)

    Bensalem, M.; Sommier, A.; Mindeguia, J. C.; Batsale, J. C.; Pradere, C.

    2018-02-01

    Recently, THz waves have been shown to be an effective technique for investigating water diffusion within porous media, such as biomaterials or insulation materials. This applicability is due to the sufficient resolution for such applications and the safe levels of radiation. This study aims to achieve contactless absolute water content measurements in the steady-state case in semi-transparent solids (wood) using a transmittance THz setup. First, a calibration method is developed to validate an analytical model based on the Beer-Lambert law, linking the absorption coefficient, the density of the solid, and its water content. Then, an estimation of the water content on a local scale in a transient-state case (drying) is performed. This study shows that THz waves are an effective contactless, safe, and low-cost technique for the measurement of water content in a porous medium such as wood.
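
    A hedged sketch of the inversion idea: recover water content from measured THz transmittance through a Beer-Lambert model; the linear calibration alpha = a0 + a1*rho*w and all coefficients are assumptions standing in for the paper's calibration:

        import numpy as np

        a0, a1 = 0.8, 12.0      # assumed dry-wood and water absorption terms (1/cm)
        rho, L = 0.45, 1.0      # sample density (g/cm^3) and thickness (cm)

        def water_content(T):
            alpha = -np.log(T) / L            # Beer-Lambert: T = exp(-alpha * L)
            return (alpha - a0) / (a1 * rho)  # invert the assumed linear calibration

        for T in (0.40, 0.20, 0.10):
            print(f"transmittance {T:.2f} -> water content ~ {water_content(T):.3f} g/g")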

  13. Computer-aided boundary delineation of agricultural lands

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1989-01-01

    The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.

  14. Whitecap coverage from aerial photography

    NASA Technical Reports Server (NTRS)

    Austin, R. W.

    1970-01-01

    A program for determining the feasibility of deriving sea surface wind speeds by remotely sensing ocean surface radiances in the nonglitter regions is discussed. With a knowledge of the duration and geographical extent of the wind field, information about the conventional sea state may be derived. The use of optical techniques for determining sea state has obvious limitations. For example, such means can be used only in daylight and only when a clear path of sight is available between the sensor and the surface. However, sensors and vehicles capable of providing the data needed for such techniques are planned for the near future; therefore, a secondary or backup capability can be provided with little added effort. The information currently being sought regarding white water coverage is also of direct interest to those working with passive microwave systems, the study of energy transfer between winds and ocean currents, the aerial estimation of wind speeds, and many others.

  15. Models of Fate and Transport of Pollutants in Surface Waters

    NASA Astrophysics Data System (ADS)

    Okome, Gloria Eloho

    There is a need to answer the crucial question of what happens to pollutants in surface waters. This question must be answered to determine the factors controlling the fate and transport of chemicals and their evolutionary state in surface waters. Monitoring and experimental methods are used to establish the environmental states. These measurements are used with known scientific principles to identify processes and to estimate future environmental conditions. Conceptual and computational models are needed to analyze environmental processes by applying the knowledge gained from experimentation and theory. Usually, a computational framework includes the mathematics and the physics of the phenomenon and the measured characteristics to model pollutant interactions and transport in surface water. However, under certain conditions, the complexity of the actual environment precludes the use of these techniques. Pollutants in several forms are followed in this research: nitrogen (nitrate, nitrite, Kjeldahl nitrogen, and ammonia), phosphorus (orthophosphate and total phosphorus), bacteria (E. coli and fecal coliform), and salts (chloride and sulfate). The objective of this research is to model the fate and transport of these pollutants under non-ideal conditions of surface water measurements and to develop computational methods to forecast their fate and transport. In an environment of extreme drought, such as in the Brazos River basin, where small streams flow intermittently, there is added complexity due to the absence of regularly sampled data. The usual modeling techniques are no longer applicable because of sparse measurements in space and time. Still, there is a need to estimate the conditions of the environment from the information that is present. Alternative methods for this estimation must be devised and applied to this situation, which is the task of this dissertation. This research devises a forecasting technique that is based upon sparse data. The method uses the equations of functions that fit the time series data for pollutants at each water quality monitoring station to interpolate and extrapolate the data and to make estimates of present and future pollution levels. This method was applied to data obtained from the Leon River watershed (Indian Creek) and the Navasota River.
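
    A small sketch of the interpolation/extrapolation idea: fit a trend-plus-seasonal function to sparse concentrations at one monitoring station and evaluate it at unsampled or future times; the functional form and the data are illustrative, not measurements from the watersheds named above:

        import numpy as np
        from scipy.optimize import curve_fit

        def conc_model(t, c0, c1, a, phase):
            # Linear trend plus an annual cycle (t in days).
            return c0 + c1 * t + a * np.sin(2 * np.pi * t / 365.25 + phase)

        rng = np.random.default_rng(5)
        t_obs = np.sort(rng.choice(np.arange(900), size=25, replace=False)).astype(float)  # sparse sampling
        y_obs = conc_model(t_obs, 2.0, 0.001, 0.8, 0.3) + rng.normal(0, 0.15, t_obs.size)

        params, _ = curve_fit(conc_model, t_obs, y_obs, p0=[1.0, 0.0, 1.0, 0.0])
        print("forecast at day 950 and 1000:", conc_model(np.array([950.0, 1000.0]), *params))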

  16. V and V of ISHM Software for Space Exploration

    NASA Technical Reports Server (NTRS)

    Markosian, Lawrence; Feather, Martin, S.; Brinza, David; Figueroa, F.

    2005-01-01

    NASA has established a far-reaching and long-term program for robotic and manned exploration of the solar system, beginning with missions to the Moon and Mars. The Crew Transportation System (CTS), a key system for space exploration, imposes four requirements that ISHM addresses. These requirements have a wide range of implications for V&V and certification of ISHM. There is a range of time-criticality for ISHM actions, from prognostication, which is often (but not always) non-time-critical, to time-critical state estimation and system management under off-nominal emergency conditions. These are externally imposed requirements on ISHM that are subject to V&V. In addition, a range of techniques is needed to implement an ISHM. The approaches to ISHM are described elsewhere. These approaches range from well-understood algorithms for low-level data analysis, validation, and reporting, to AI techniques for state estimation and planning. The range of techniques, and specifically the use of AI techniques such as reasoning under uncertainty and mission planning (and re-planning), implies that several V&V approaches may be required. Depending on the ISHM architecture, traditional testing approaches may be adequate for some ISHM functionality. The AI-based approaches to reasoning under uncertainty, model-based reasoning, and planning share characteristics typical of other complex software systems, but they also have characteristics that set them apart and challenge standard V&V techniques. The range of possible solutions to the overall ISHM problem imposes internal challenges on V&V. The V&V challenges increase when hard real-time constraints are imposed for time-critical functionality. For example, there is an external requirement that impending catastrophic failure of the Launch Vehicle (LV) at launch time be detected and life-saving action be taken within two seconds. In this paper we outline the challenges for ISHM V&V, existing approaches and analogs in other software application areas, and possible new approaches to the V&V challenges for space exploration ISHM.

  17. Replica exchange enveloping distribution sampling (RE-EDS): A robust method to estimate multiple free-energy differences from a single simulation.

    PubMed

    Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina

    2016-10-21

    In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculate free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows multiple free-energy differences to be calculated in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.
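
    A toy sketch of how multiple free-energy differences are read out of a single reference-state trajectory: exponential (Zwanzig-type) reweighting of each end-state energy against the reference ensemble, with pairwise differences obtained by subtraction; the synthetic energy series below stand in for real MD output:

        import numpy as np

        kT = 2.494                    # kJ/mol near 300 K
        rng = np.random.default_rng(6)
        n_frames = 20000

        # Synthetic (V_endstate - V_reference) series for three end states sampled in the reference ensemble.
        dV = {"A": rng.normal(2.0, 1.5, n_frames),
              "B": rng.normal(5.0, 1.5, n_frames),
              "C": rng.normal(1.0, 1.5, n_frames)}

        def dF_to_reference(dv):
            # dF_iR = -kT * ln < exp(-(V_i - V_R)/kT) >_R, with a log-sum-exp shift for stability.
            x = -dv / kT
            m = x.max()
            return -kT * (m + np.log(np.mean(np.exp(x - m))))

        dF = {state: dF_to_reference(dv) for state, dv in dV.items()}
        print("dF(A -> B) ~", round(dF["B"] - dF["A"], 2), "kJ/mol")   # one of the pairwise differences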

  18. Real-time flutter identification

    NASA Technical Reports Server (NTRS)

    Roy, R.; Walker, R.

    1985-01-01

    The techniques and a FORTRAN 77 MOdal Parameter IDentification (MOPID) computer program developed for identification of the frequencies and damping ratios of multiple flutter modes in real time are documented. Physically meaningful model parameterization was combined with state-of-the-art recursive identification techniques and applied to the problem of real-time flutter mode monitoring. The performance of the algorithm in terms of convergence speed and parameter estimation error is demonstrated for several simulated data cases, and the results of actual flight data analysis from two different vehicles are presented. It is indicated that the algorithm is capable of real-time monitoring of aircraft flutter characteristics with a high degree of reliability.
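
    A schematic stand-in for this kind of real-time monitoring (not the MOPID algorithm itself): recursive least squares tracking of an AR(2) model of a randomly excited structural mode, with the identified pole converted to frequency and damping ratio; the model order, excitation, and parameters are illustrative assumptions:

        import numpy as np

        dt, fn, zeta = 0.01, 5.0, 0.03                 # sample time; true mode: 5 Hz, 3% damping
        wn = 2 * np.pi * fn
        r = np.exp(-zeta * wn * dt)
        a1_true = 2 * r * np.cos(wn * np.sqrt(1 - zeta**2) * dt)
        a2_true = -r**2

        rng = np.random.default_rng(7)
        y = np.zeros(4000)
        for k in range(2, y.size):                     # turbulence-excited mode as an AR(2) process
            y[k] = a1_true * y[k - 1] + a2_true * y[k - 2] + rng.normal(0.0, 1.0)

        theta = np.zeros(2)                            # recursive least squares with forgetting
        P, lam = 1e3 * np.eye(2), 0.995
        for k in range(2, y.size):
            phi = np.array([y[k - 1], y[k - 2]])
            K = P @ phi / (lam + phi @ P @ phi)
            theta += K * (y[k] - phi @ theta)
            P = (P - np.outer(K, phi @ P)) / lam

        a1, a2 = theta
        pole = (a1 + np.sqrt(complex(a1**2 + 4 * a2))) / 2   # one root of z^2 - a1*z - a2 = 0
        s = np.log(pole) / dt
        print(f"frequency ~ {abs(s.imag) / (2 * np.pi):.2f} Hz, damping ratio ~ {-s.real / abs(s):.3f}")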

  19. Fission Fragment Studies by Gamma-Ray Spectrometry with the Mass Separator Lohengrin

    NASA Astrophysics Data System (ADS)

    Materna, T.; Amouroux, C.; Bail, A.; Bideau, A.; Chabod, S.; Faust, H.; Capellan, N.; Kessedjian, G.; Köster, U.; Letourneau, A.; Litaize, O.; Martin, F.; Mathieu, L.; Méplan, O.; Panebianco, S.; Régis, J.-M.; Rudigier, M.; Sage, C.; Serot, O.; Urban, W.

    2014-09-01

    A gamma spectrometric technique was implemented at the exit of the fission fragment separator of the ILL. It allows a precise measurement of the isotopic yields of the most important actinides in the heavy-fragment region through an unambiguous identification of the nuclear charge of the fragments selected by the mass spectrometer. The status of the project and the latest results are reviewed. A spin-off of this activity is the identification of unknown nanosecond isomers in exotic nuclei through the observation of a disturbed ionic charge distribution. This technique has been improved to provide an estimate of the lifetime of the isomeric state.

  20. Efficient continuous-variable state tomography using Padua points

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements done at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements, but remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.
